I0306 02:38:59.604733 19 test_context.go:406] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-780690759
I0306 02:38:59.604746 19 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0306 02:38:59.604856 19 e2e.go:109] Starting e2e run "e0336c13-b471-4627-93ef-421cefc2a866" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583462338 - Will randomize all specs
Will run 278 of 4843 specs

Mar 6 02:38:59.650: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar 6 02:38:59.652: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 6 02:38:59.663: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 6 02:38:59.693: INFO: 22 / 22 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 6 02:38:59.693: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 6 02:38:59.693: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 6 02:38:59.700: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-amd64' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 6 02:38:59.700: INFO: e2e test version: v1.17.3
Mar 6 02:38:59.705: INFO: kube-apiserver version: v1.17.3
Mar 6 02:38:59.705: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar 6 02:38:59.709: INFO: Cluster IP family: ipv4
SSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:38:59.709: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename gc
Mar 6 02:38:59.738: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Mar 6 02:38:59.822: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4031
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
Mar 6 02:39:01.023: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:39:01.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0306 02:39:01.023932 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-4031" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":1,"skipped":3,"failed":0}
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:39:01.029: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-5282
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
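The garbage-collector test that passed above ("should delete RS created by deployment when not orphaning") exercises cascading deletion: the Deployment owns its ReplicaSet via an ownerReference, and the ReplicaSet owns the Pods, so deleting the Deployment without orphaning should eventually remove all three (the "expected 0 rs, got 1 rs" steps are intermediate polls printed before the collector caught up). A minimal sketch of the kind of object involved; the name and image here are illustrative assumptions, not taken from the log:

```yaml
# Hypothetical Deployment used to illustrate the cascading-delete path the
# test exercises; the real e2e fixture differs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-test-deployment      # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-test
  template:
    metadata:
      labels:
        app: gc-test
    spec:
      containers:
      - name: nginx
        image: nginx            # illustrative image
```

Deleting it with a plain `kubectl delete deployment gc-test-deployment` uses cascading deletion, so the owned ReplicaSet and Pods are garbage-collected; deleting with `--cascade=orphan` would instead strip the ownerReferences and leave them behind.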
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 6 02:39:15.186: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 6 02:39:15.188: INFO: Pod pod-with-prestop-exec-hook still exists Mar 6 02:39:17.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 6 02:39:17.190: INFO: Pod pod-with-prestop-exec-hook still exists Mar 6 02:39:19.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 6 02:39:19.190: INFO: Pod pod-with-prestop-exec-hook still exists Mar 6 02:39:21.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 6 02:39:21.191: INFO: Pod pod-with-prestop-exec-hook still exists Mar 6 02:39:23.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 6 02:39:23.190: INFO: Pod pod-with-prestop-exec-hook still exists Mar 6 02:39:25.188: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 6 02:39:25.190: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 6 02:39:25.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5282" for this suite. 
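The test above creates a pod named pod-with-prestop-exec-hook and verifies that its PreStop exec hook runs before the container is terminated; the repeated "still exists" polls span the grace period during which the hook executes. A minimal sketch of a pod spec with such a hook (the image and hook command are illustrative assumptions; the actual e2e fixture instead reports to the HTTPGet handler pod created in BeforeEach):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30   # the preStop hook must finish within this window
  containers:
  - name: main
    image: busybox                    # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container after deletion is requested and before
          # SIGTERM is delivered; illustrative command only.
          command: ["sh", "-c", "echo prestop-hook-ran > /tmp/prestop"]
```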
• [SLOW TEST:24.187 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":3,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:39:25.217: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6499
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 6 02:39:25.346: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 6 02:39:33.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6499 create -f -'
Mar 6 02:39:33.436: INFO: stderr: ""
Mar 6 02:39:33.436: INFO: stdout: "e2e-test-crd-publish-openapi-4043-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 6 02:39:33.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6499 delete e2e-test-crd-publish-openapi-4043-crds test-cr'
Mar 6 02:39:33.545: INFO: stderr: ""
Mar 6 02:39:33.545: INFO: stdout: "e2e-test-crd-publish-openapi-4043-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Mar 6 02:39:33.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6499 apply -f -'
Mar 6 02:39:33.680: INFO: stderr: ""
Mar 6 02:39:33.680: INFO: stdout: "e2e-test-crd-publish-openapi-4043-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 6 02:39:33.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6499 delete e2e-test-crd-publish-openapi-4043-crds test-cr'
Mar 6 02:39:33.753: INFO: stderr: ""
Mar 6 02:39:33.753: INFO: stdout: "e2e-test-crd-publish-openapi-4043-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Mar 6 02:39:33.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-4043-crds'
Mar 6 02:39:33.887: INFO: stderr: ""
Mar 6 02:39:33.887: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4043-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI
[Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 6 02:39:36.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6499" for this suite.
• [SLOW TEST:11.439 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":3,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:39:36.655: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-618
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 6 02:39:37.141: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 6 02:39:40.159: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
Mar 6 02:39:40.228: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:39:50.342: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:40:00.439: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:40:10.540: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:40:20.550: INFO: Waiting for webhook configuration to be ready...
Mar 6 02:40:20.550: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-618".
STEP: Found 6 events.
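The failing step above registers a webhook whose 1s timeout is deliberately shorter than its handler's 5s latency, then waits for the configuration to become ready; that readiness wait is what times out here. The configuration being registered looks roughly like the sketch below; the object name, webhook name, path, rules, and CA placeholder are illustrative assumptions, while the service name and namespace match the log:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-config          # illustrative name
webhooks:
- name: slow.example.com             # illustrative name
  timeoutSeconds: 1                  # shorter than the handler's 5s sleep
  failurePolicy: Ignore              # so the expected timeout is tolerated, not fatal
  clientConfig:
    service:
      name: e2e-test-webhook         # matches the service in the log
      namespace: webhook-618
      path: /always-allow-delay-5s   # illustrative path
    caBundle: "<base64-encoded-CA>"  # elided
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

With `timeoutSeconds` below the handler latency, every admission call to this webhook deadlines after 1s; whether the request is then admitted or rejected depends on `failurePolicy`.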
Mar 6 02:40:20.553: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d2xcb: {default-scheduler } Scheduled: Successfully assigned webhook-618/sample-webhook-deployment-5f65f8c764-d2xcb to worker02
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:37 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:37 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-d2xcb
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:37 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d2xcb: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:37 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d2xcb: {kubelet worker02} Created: Created container sample-webhook
Mar 6 02:40:20.553: INFO: At 2020-03-06 02:39:38 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d2xcb: {kubelet worker02} Started: Started container sample-webhook
Mar 6 02:40:20.555: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 6 02:40:20.555: INFO: sample-webhook-deployment-5f65f8c764-d2xcb worker02 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:39:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:39:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:39:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:39:37 +0000 UTC }]
Mar 6 02:40:20.556: INFO:
Mar 6 02:40:20.558: INFO: Logging node info for node master01
Mar 6 02:40:20.560: INFO: Node Info: &Node{ObjectMeta:{master01 /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 2910 0 2020-03-06 02:29:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:38:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:40:20.560: INFO: Logging kubelet events for node master01
Mar 6 02:40:20.564: INFO: Logging pods the kubelet thinks is on node master01
Mar 6 02:40:20.576: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:40:20.576: INFO: Container kube-flannel ready: true, restart count 0
Mar 6 02:40:20.576: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container kube-proxy ready: true, restart count 0
Mar 6 02:40:20.576: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container kube-apiserver ready: true, restart count 0
Mar 6 02:40:20.576: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container kube-controller-manager ready: true, restart count 1
Mar 6 02:40:20.576: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container kube-scheduler ready: true, restart count 1
Mar 6 02:40:20.576: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:40:20.576: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:40:20.576: INFO: Container systemd-logs ready: true, restart count 0
W0306 02:40:20.579790 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:40:20.605: INFO: Latency metrics for node master01
Mar 6 02:40:20.605: INFO: Logging node info for node master02
Mar 6 02:40:20.607: INFO: Node Info: &Node{ObjectMeta:{master02 /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 2904 0 2020-03-06 02:29:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 6 02:40:20.607: INFO: Logging kubelet events for node master02
Mar 6 02:40:20.613: INFO: Logging pods the kubelet thinks is on node master02
Mar 6 02:40:20.629: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container sonobuoy-worker ready: true, restart count 0
Mar 6 02:40:20.629: INFO: Container systemd-logs ready: true, restart count 0
Mar 6 02:40:20.629: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container kube-apiserver ready: true, restart count 0
Mar 6 02:40:20.629: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container kube-controller-manager ready: true, restart count 1
Mar 6 02:40:20.629: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container kube-scheduler ready: true, restart count 1
Mar 6 02:40:20.629: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container kube-proxy ready: true, restart count 0
Mar 6 02:40:20.629: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:40:20.629: INFO: Container kube-flannel ready: true, restart count 0
Mar 6 02:40:20.629: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.629: INFO: Container coredns ready: true, restart count 0
W0306 02:40:20.632029 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:40:20.648: INFO: Latency metrics for node master02 Mar 6 02:40:20.648: INFO: Logging node info for node master03 Mar 6 02:40:20.650: INFO: Node Info: &Node{ObjectMeta:{master03 /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 2903 0 2020-03-06 02:29:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 
UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:38:51 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 
192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 6 02:40:20.650: INFO: Logging kubelet events for node master03 Mar 6 02:40:20.654: INFO: Logging pods the kubelet thinks is on node master03 Mar 6 02:40:20.664: INFO: 
coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.664: INFO: Container coredns ready: true, restart count 0 Mar 6 02:40:20.664: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) Mar 6 02:40:20.664: INFO: Container sonobuoy-worker ready: true, restart count 0 Mar 6 02:40:20.664: INFO: Container systemd-logs ready: true, restart count 0 Mar 6 02:40:20.664: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.664: INFO: Container kube-apiserver ready: true, restart count 0 Mar 6 02:40:20.664: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.664: INFO: Container kube-scheduler ready: true, restart count 1 Mar 6 02:40:20.664: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.664: INFO: Container kube-proxy ready: true, restart count 0 Mar 6 02:40:20.664: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.664: INFO: Container kubernetes-dashboard ready: true, restart count 0 Mar 6 02:40:20.664: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.664: INFO: Container kube-controller-manager ready: true, restart count 1 Mar 6 02:40:20.664: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded) Mar 6 02:40:20.664: INFO: Init container install-cni ready: true, restart count 0 Mar 6 02:40:20.664: INFO: Container kube-flannel ready: true, restart count 0 Mar 6 02:40:20.664: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.664: INFO: 
Container dashboard-metrics-scraper ready: true, restart count 0 W0306 02:40:20.667076 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 6 02:40:20.685: INFO: Latency metrics for node master03 Mar 6 02:40:20.685: INFO: Logging node info for node worker01 Mar 6 02:40:20.687: INFO: Node Info: &Node{ObjectMeta:{worker01 /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 3058 0 2020-03-06 02:30:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:39:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 
192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 6 02:40:20.687: INFO: Logging kubelet events for node worker01 Mar 6 02:40:20.691: INFO: Logging pods the kubelet thinks is on node worker01 Mar 6 02:40:20.703: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Init container install-cni ready: true, restart count 0 Mar 6 02:40:20.703: INFO: Container kube-flannel ready: true, restart count 1 Mar 6 02:40:20.703: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Container contour ready: false, restart count 0 Mar 6 02:40:20.703: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Container metrics-server ready: true, restart count 0 Mar 6 02:40:20.703: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Container kuard ready: true, restart count 0 Mar 6 02:40:20.703: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container 
statuses recorded) Mar 6 02:40:20.703: INFO: Container contour ready: false, restart count 0 Mar 6 02:40:20.703: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) Mar 6 02:40:20.703: INFO: Container sonobuoy-worker ready: true, restart count 0 Mar 6 02:40:20.703: INFO: Container systemd-logs ready: true, restart count 0 Mar 6 02:40:20.703: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Container kube-proxy ready: true, restart count 0 Mar 6 02:40:20.703: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Init container envoy-initconfig ready: false, restart count 0 Mar 6 02:40:20.703: INFO: Container envoy ready: false, restart count 0 Mar 6 02:40:20.703: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Container contour ready: false, restart count 0 Mar 6 02:40:20.703: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Container kuard ready: true, restart count 0 Mar 6 02:40:20.703: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.703: INFO: Container kuard ready: true, restart count 0 W0306 02:40:20.707820 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 6 02:40:20.727: INFO: Latency metrics for node worker01 Mar 6 02:40:20.727: INFO: Logging node info for node worker02 Mar 6 02:40:20.729: INFO: Node Info: &Node{ObjectMeta:{worker02 /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 3056 0 2020-03-06 02:30:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:16 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:16 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:39:16 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:39:16 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 6 02:40:20.729: INFO: Logging kubelet events for node worker02 Mar 6 02:40:20.733: INFO: Logging pods the kubelet thinks is on node worker02 Mar 6 02:40:20.737: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.737: INFO: Container kube-proxy ready: true, restart count 1 Mar 6 02:40:20.737: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded) Mar 6 02:40:20.737: INFO: Init container envoy-initconfig ready: false, restart count 0 Mar 6 02:40:20.737: INFO: Container envoy ready: false, restart count 0 Mar 6 02:40:20.737: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded) Mar 6 02:40:20.737: INFO: Container kube-sonobuoy ready: true, restart count 0 Mar 6 02:40:20.737: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) Mar 6 02:40:20.737: INFO: Container e2e ready: true, restart count 0 Mar 6 02:40:20.737: INFO: Container sonobuoy-worker ready: true, restart count 0 Mar 6 02:40:20.737: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded) Mar 6 02:40:20.737: INFO: Container sonobuoy-worker ready: true, restart count 0 Mar 6 02:40:20.737: INFO: Container 
systemd-logs ready: true, restart count 0
Mar 6 02:40:20.737: INFO: sample-webhook-deployment-5f65f8c764-d2xcb started at 2020-03-06 02:39:37 +0000 UTC (0+1 container statuses recorded)
Mar 6 02:40:20.737: INFO: Container sample-webhook ready: true, restart count 0
Mar 6 02:40:20.737: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar 6 02:40:20.737: INFO: Init container install-cni ready: true, restart count 0
Mar 6 02:40:20.737: INFO: Container kube-flannel ready: true, restart count 0
W0306 02:40:20.740228 19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 6 02:40:20.756: INFO: Latency metrics for node worker02
Mar 6 02:40:20.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-618" for this suite.
STEP: Destroying namespace "webhook-618-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [44.182 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar 6 02:40:20.550: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred
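The "timed out waiting for the condition" message in the failure above is the generic error the e2e framework's polling helper returns when a readiness check never succeeds before its deadline (the actual implementation is Go, in k8s.io/apimachinery's wait package). As a rough illustration only, not the framework's real code, the retry loop behaves something like the following sketch; the names `poll` and `ConditionTimeout` are invented here:

```python
import time


class ConditionTimeout(Exception):
    """Raised when the condition never becomes true before the deadline."""


def poll(interval, timeout, condition):
    # Re-check condition() every `interval` seconds until it returns True,
    # giving up after `timeout` seconds with the same message seen in the
    # log above: "timed out waiting for the condition".
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise ConditionTimeout("timed out waiting for the condition")
```

In this run, the webhook test's check ("waiting for webhook configuration to be ready") apparently never returned true within its window, so the suite logged the diagnostics above, recorded the spec as failed, and moved on.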
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2225
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":3,"skipped":22,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 6 02:40:20.837: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8781
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Mar 6 02:40:28.988: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8781 PodName:pod-sharedvolume-12be2fd0-e4c9-4a25-a178-e5a4766a478e ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 6 02:40:28.988: INFO: >>> kubeConfig:
/tmp/kubeconfig-780690759 Mar 6 02:40:29.088: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 6 02:40:29.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8781" for this suite. • [SLOW TEST:8.259 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":4,"skipped":27,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 6 02:40:29.097: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-1103 STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 6 02:40:29.233: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 6 02:40:31.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1103" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":115,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 6 02:40:31.268: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759 STEP: Building a namespace api object, basename projected STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1491 STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 6 02:40:31.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556" in namespace "projected-1491" to be "success or failure" Mar 6 02:40:31.408: INFO: Pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882219ms Mar 6 02:40:33.410: INFO: Pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005129711s Mar 6 02:40:35.413: INFO: Pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.00833495s
STEP: Saw pod success
Mar  6 02:40:35.413: INFO: Pod "downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556" satisfied condition "success or failure"
Mar  6 02:40:35.417: INFO: Trying to get logs from node worker02 pod downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556 container client-container: 
STEP: delete the pod
Mar  6 02:40:35.431: INFO: Waiting for pod downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556 to disappear
Mar  6 02:40:35.433: INFO: Pod downwardapi-volume-30b532f3-5ef9-493b-b591-e88207b8e556 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:40:35.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1491" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":140,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:40:35.440: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-8281
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:40:37.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8281" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":151,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:40:37.591: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-521
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-521, will wait for the garbage collector to delete the pods
Mar  6 02:40:39.795: INFO: Deleting Job.batch foo took: 4.924021ms
Mar  6 02:40:39.895: INFO: Terminating Job.batch foo pods took: 100.104439ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:41:13.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-521" for this suite.

• [SLOW TEST:36.216 seconds]
[sig-apps] Job
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":8,"skipped":154,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:41:13.808: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-3485
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:41:35.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3485" for this suite.

• [SLOW TEST:21.299 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":162,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:41:35.106: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename proxy
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-5380
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 02:41:35.248: INFO: (0) /api/v1/nodes/worker01/proxy/logs/:
anaconda/
audit/
boot.log
boot.log
>>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3421
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 02:41:35.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034" in namespace "projected-3421" to be "success or failure"
Mar  6 02:41:35.440: INFO: Pod "downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017076ms
Mar  6 02:41:37.449: INFO: Pod "downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011726889s
STEP: Saw pod success
Mar  6 02:41:37.449: INFO: Pod "downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034" satisfied condition "success or failure"
Mar  6 02:41:37.451: INFO: Trying to get logs from node worker02 pod downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034 container client-container: 
STEP: delete the pod
Mar  6 02:41:37.466: INFO: Waiting for pod downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034 to disappear
Mar  6 02:41:37.468: INFO: Pod downwardapi-volume-4a53e79d-5aa3-4060-bde4-99351516b034 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:41:37.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3421" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":177,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:41:37.476: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5796
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-ba9c3339-ddd4-4172-bced-4deb82a408be
STEP: Creating a pod to test consume secrets
Mar  6 02:41:37.625: INFO: Waiting up to 5m0s for pod "pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac" in namespace "secrets-5796" to be "success or failure"
Mar  6 02:41:37.628: INFO: Pod "pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958754ms
Mar  6 02:41:39.630: INFO: Pod "pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005197416s
STEP: Saw pod success
Mar  6 02:41:39.630: INFO: Pod "pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac" satisfied condition "success or failure"
Mar  6 02:41:39.633: INFO: Trying to get logs from node worker02 pod pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac container secret-volume-test: 
STEP: delete the pod
Mar  6 02:41:39.651: INFO: Waiting for pod pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac to disappear
Mar  6 02:41:39.653: INFO: Pod pod-secrets-9a5b3e81-145f-442b-b423-e071bbe1d8ac no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:41:39.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5796" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":208,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:41:39.660: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-4019
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-sjwb
STEP: Creating a pod to test atomic-volume-subpath
Mar  6 02:41:39.804: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sjwb" in namespace "subpath-4019" to be "success or failure"
Mar  6 02:41:39.806: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087472ms
Mar  6 02:41:41.810: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 2.006015461s
Mar  6 02:41:43.812: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 4.00825221s
Mar  6 02:41:45.818: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 6.014478492s
Mar  6 02:41:47.821: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 8.017060425s
Mar  6 02:41:49.823: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 10.01937165s
Mar  6 02:41:51.826: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 12.022078842s
Mar  6 02:41:53.828: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 14.024501637s
Mar  6 02:41:55.831: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 16.027064662s
Mar  6 02:41:57.837: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 18.032955461s
Mar  6 02:41:59.839: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Running", Reason="", readiness=true. Elapsed: 20.035551681s
Mar  6 02:42:01.843: INFO: Pod "pod-subpath-test-configmap-sjwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.038928148s
STEP: Saw pod success
Mar  6 02:42:01.843: INFO: Pod "pod-subpath-test-configmap-sjwb" satisfied condition "success or failure"
Mar  6 02:42:01.847: INFO: Trying to get logs from node worker02 pod pod-subpath-test-configmap-sjwb container test-container-subpath-configmap-sjwb: 
STEP: delete the pod
Mar  6 02:42:01.859: INFO: Waiting for pod pod-subpath-test-configmap-sjwb to disappear
Mar  6 02:42:01.862: INFO: Pod pod-subpath-test-configmap-sjwb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-sjwb
Mar  6 02:42:01.862: INFO: Deleting pod "pod-subpath-test-configmap-sjwb" in namespace "subpath-4019"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:42:01.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4019" for this suite.

• [SLOW TEST:22.211 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":13,"skipped":208,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:42:01.871: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-2710
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar  6 02:42:01.998: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:42:04.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2710" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":14,"skipped":216,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:42:04.645: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-5732
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:04.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5732" for this suite.

• [SLOW TEST:60.155 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":217,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:04.800: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3806
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-b5491ee0-8a7d-412f-83d5-89c566538d7e
STEP: Creating a pod to test consume secrets
Mar  6 02:43:04.949: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf" in namespace "projected-3806" to be "success or failure"
Mar  6 02:43:04.952: INFO: Pod "pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166774ms
Mar  6 02:43:06.954: INFO: Pod "pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004795253s
STEP: Saw pod success
Mar  6 02:43:06.954: INFO: Pod "pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf" satisfied condition "success or failure"
Mar  6 02:43:06.957: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf container projected-secret-volume-test: 
STEP: delete the pod
Mar  6 02:43:06.971: INFO: Waiting for pod pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf to disappear
Mar  6 02:43:06.973: INFO: Pod pod-projected-secrets-4fa41554-cb10-4ffd-8968-1f369bbdbcaf no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:06.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3806" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":251,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:06.982: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-7589
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Mar  6 02:43:07.137: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4255 0 2020-03-06 02:43:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar  6 02:43:07.137: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4256 0 2020-03-06 02:43:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar  6 02:43:07.137: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4257 0 2020-03-06 02:43:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Mar  6 02:43:17.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4313 0 2020-03-06 02:43:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar  6 02:43:17.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4314 0 2020-03-06 02:43:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Mar  6 02:43:17.157: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7589 /api/v1/namespaces/watch-7589/configmaps/e2e-watch-test-label-changed 1c0bab8e-4585-405d-b0d1-f04d7fe5e21b 4315 0 2020-03-06 02:43:07 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:17.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7589" for this suite.

• [SLOW TEST:10.181 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":17,"skipped":271,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:17.163: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename proxy
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-7131
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-jw2qq in namespace proxy-7131
I0306 02:43:17.317911      19 runners.go:189] Created replication controller with name: proxy-service-jw2qq, namespace: proxy-7131, replica count: 1
I0306 02:43:18.368189      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0306 02:43:19.368340      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0306 02:43:20.368469      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0306 02:43:21.368603      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0306 02:43:22.368727      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0306 02:43:23.368849      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0306 02:43:24.368993      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0306 02:43:25.369135      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0306 02:43:26.369259      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0306 02:43:27.369402      19 runners.go:189] proxy-service-jw2qq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar  6 02:43:27.372: INFO: setup took 10.079385495s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Mar  6 02:43:27.383: INFO: (0) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 10.75768ms)
Mar  6 02:43:27.383: INFO: (0) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 10.638706ms)
Mar  6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 11.383987ms)
Mar  6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 11.956412ms)
Mar  6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 12.29239ms)
Mar  6 02:43:27.384: INFO: (0) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test (200; 30.010982ms)
Mar  6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 29.562596ms)
Mar  6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 29.677109ms)
Mar  6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 30.114429ms)
Mar  6 02:43:27.402: INFO: (0) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 29.799644ms)
Mar  6 02:43:27.404: INFO: (0) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 31.632081ms)
Mar  6 02:43:27.408: INFO: (1) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.661707ms)
Mar  6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 5.448258ms)
Mar  6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 6.079197ms)
Mar  6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 6.153358ms)
Mar  6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 5.414763ms)
Mar  6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 5.943711ms)
Mar  6 02:43:27.410: INFO: (1) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.112722ms)
Mar  6 02:43:27.411: INFO: (1) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 6.46139ms)
Mar  6 02:43:27.411: INFO: (1) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 6.228202ms)
Mar  6 02:43:27.414: INFO: (1) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.514525ms)
Mar  6 02:43:27.421: INFO: (1) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: ... (200; 16.919811ms)
Mar  6 02:43:27.422: INFO: (1) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 17.695105ms)
Mar  6 02:43:27.422: INFO: (1) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 17.082024ms)
Mar  6 02:43:27.423: INFO: (1) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 17.987299ms)
Mar  6 02:43:27.425: INFO: (2) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test (200; 6.894675ms)
Mar  6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.447179ms)
Mar  6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 7.319094ms)
Mar  6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.19389ms)
Mar  6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 6.921901ms)
Mar  6 02:43:27.430: INFO: (2) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 7.572049ms)
Mar  6 02:43:27.431: INFO: (2) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 7.50989ms)
Mar  6 02:43:27.431: INFO: (2) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 7.264885ms)
Mar  6 02:43:27.431: INFO: (2) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 7.801208ms)
Mar  6 02:43:27.431: INFO: (2) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 7.342208ms)
Mar  6 02:43:27.432: INFO: (2) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 8.237935ms)
Mar  6 02:43:27.433: INFO: (2) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 9.481511ms)
Mar  6 02:43:27.437: INFO: (3) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 4.023759ms)
Mar  6 02:43:27.439: INFO: (3) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.598598ms)
Mar  6 02:43:27.439: INFO: (3) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.268182ms)
Mar  6 02:43:27.439: INFO: (3) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 6.138413ms)
Mar  6 02:43:27.439: INFO: (3) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 6.923995ms)
Mar  6 02:43:27.440: INFO: (3) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test (200; 7.926913ms)
Mar  6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 7.978309ms)
Mar  6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.144636ms)
Mar  6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 8.594394ms)
Mar  6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 8.48441ms)
Mar  6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 8.063273ms)
Mar  6 02:43:27.441: INFO: (3) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 8.398984ms)
Mar  6 02:43:27.442: INFO: (3) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.807832ms)
Mar  6 02:43:27.442: INFO: (3) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.637109ms)
Mar  6 02:43:27.446: INFO: (4) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test<... (200; 4.162762ms)
Mar  6 02:43:27.447: INFO: (4) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 4.431647ms)
Mar  6 02:43:27.449: INFO: (4) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.720341ms)
Mar  6 02:43:27.449: INFO: (4) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 7.332812ms)
Mar  6 02:43:27.450: INFO: (4) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 7.360394ms)
Mar  6 02:43:27.451: INFO: (4) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 8.469276ms)
Mar  6 02:43:27.451: INFO: (4) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 9.043979ms)
Mar  6 02:43:27.452: INFO: (4) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 10.055199ms)
Mar  6 02:43:27.452: INFO: (4) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 9.776144ms)
Mar  6 02:43:27.452: INFO: (4) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 10.460658ms)
Mar  6 02:43:27.452: INFO: (4) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 9.760661ms)
Mar  6 02:43:27.453: INFO: (4) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 9.926909ms)
Mar  6 02:43:27.453: INFO: (4) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 10.062458ms)
Mar  6 02:43:27.454: INFO: (4) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 11.209613ms)
Mar  6 02:43:27.458: INFO: (5) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test<... (200; 3.776935ms)
Mar  6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 5.475411ms)
Mar  6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 4.795473ms)
Mar  6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 4.696641ms)
Mar  6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 5.441649ms)
Mar  6 02:43:27.459: INFO: (5) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 4.892401ms)
Mar  6 02:43:27.460: INFO: (5) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 5.158743ms)
Mar  6 02:43:27.460: INFO: (5) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 5.463738ms)
Mar  6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 8.281617ms)
Mar  6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.521791ms)
Mar  6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.473404ms)
Mar  6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 8.145207ms)
Mar  6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.499329ms)
Mar  6 02:43:27.463: INFO: (5) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 8.064096ms)
Mar  6 02:43:27.465: INFO: (6) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 2.005048ms)
Mar  6 02:43:27.466: INFO: (6) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 2.993281ms)
Mar  6 02:43:27.466: INFO: (6) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: ... (200; 5.603291ms)
Mar  6 02:43:27.469: INFO: (6) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.508694ms)
Mar  6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.508064ms)
Mar  6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 7.684856ms)
Mar  6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 8.132945ms)
Mar  6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.762546ms)
Mar  6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 7.490046ms)
Mar  6 02:43:27.471: INFO: (6) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.617804ms)
Mar  6 02:43:27.475: INFO: (6) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 10.944438ms)
Mar  6 02:43:27.475: INFO: (6) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 11.482834ms)
Mar  6 02:43:27.479: INFO: (7) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 4.385047ms)
Mar  6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 6.143238ms)
Mar  6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 6.001582ms)
Mar  6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 5.870819ms)
Mar  6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.510248ms)
Mar  6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: ... (200; 5.719785ms)
Mar  6 02:43:27.481: INFO: (7) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.958845ms)
Mar  6 02:43:27.482: INFO: (7) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 7.348539ms)
Mar  6 02:43:27.482: INFO: (7) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 7.236095ms)
Mar  6 02:43:27.482: INFO: (7) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 7.320716ms)
Mar  6 02:43:27.483: INFO: (7) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.980397ms)
Mar  6 02:43:27.484: INFO: (7) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 8.102288ms)
Mar  6 02:43:27.484: INFO: (7) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.346771ms)
Mar  6 02:43:27.484: INFO: (7) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 9.084973ms)
Mar  6 02:43:27.484: INFO: (7) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.113267ms)
Mar  6 02:43:27.487: INFO: (8) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 2.684431ms)
Mar  6 02:43:27.487: INFO: (8) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 2.620382ms)
Mar  6 02:43:27.488: INFO: (8) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 2.904889ms)
Mar  6 02:43:27.488: INFO: (8) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.244387ms)
Mar  6 02:43:27.489: INFO: (8) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 4.033585ms)
Mar  6 02:43:27.489: INFO: (8) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 3.737147ms)
Mar  6 02:43:27.490: INFO: (8) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 5.36037ms)
Mar  6 02:43:27.490: INFO: (8) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 5.145491ms)
Mar  6 02:43:27.491: INFO: (8) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 5.788993ms)
Mar  6 02:43:27.491: INFO: (8) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test<... (200; 7.415157ms)
Mar  6 02:43:27.493: INFO: (8) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 8.558449ms)
Mar  6 02:43:27.493: INFO: (8) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.473915ms)
Mar  6 02:43:27.494: INFO: (8) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 9.1008ms)
Mar  6 02:43:27.495: INFO: (8) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 9.511101ms)
Mar  6 02:43:27.501: INFO: (9) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 6.264267ms)
Mar  6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.303027ms)
Mar  6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.379904ms)
Mar  6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 6.458649ms)
Mar  6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.780195ms)
Mar  6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.103348ms)
Mar  6 02:43:27.502: INFO: (9) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test (200; 7.588887ms)
Mar  6 02:43:27.504: INFO: (9) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 8.615904ms)
Mar  6 02:43:27.505: INFO: (9) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.326655ms)
Mar  6 02:43:27.505: INFO: (9) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 9.769996ms)
Mar  6 02:43:27.505: INFO: (9) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 9.740679ms)
Mar  6 02:43:27.505: INFO: (9) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 9.857813ms)
Mar  6 02:43:27.509: INFO: (9) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 13.645418ms)
Mar  6 02:43:27.509: INFO: (9) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 13.037259ms)
Mar  6 02:43:27.509: INFO: (9) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 13.612125ms)
Mar  6 02:43:27.515: INFO: (10) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 5.594066ms)
Mar  6 02:43:27.516: INFO: (10) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.922942ms)
Mar  6 02:43:27.516: INFO: (10) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 6.597625ms)
Mar  6 02:43:27.516: INFO: (10) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.372118ms)
Mar  6 02:43:27.517: INFO: (10) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test (200; 13.584456ms)
Mar  6 02:43:27.523: INFO: (10) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 13.11182ms)
Mar  6 02:43:27.523: INFO: (10) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 13.447437ms)
Mar  6 02:43:27.524: INFO: (10) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 13.964067ms)
Mar  6 02:43:27.527: INFO: (11) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 2.953805ms)
Mar  6 02:43:27.527: INFO: (11) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 3.116933ms)
Mar  6 02:43:27.528: INFO: (11) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 3.03086ms)
Mar  6 02:43:27.528: INFO: (11) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.091431ms)
Mar  6 02:43:27.530: INFO: (11) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 5.712985ms)
Mar  6 02:43:27.530: INFO: (11) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 5.67844ms)
Mar  6 02:43:27.531: INFO: (11) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 6.084747ms)
Mar  6 02:43:27.531: INFO: (11) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.338116ms)
Mar  6 02:43:27.531: INFO: (11) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 6.087171ms)
Mar  6 02:43:27.532: INFO: (11) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 6.811328ms)
Mar  6 02:43:27.532: INFO: (11) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 7.20696ms)
Mar  6 02:43:27.532: INFO: (11) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: ... (200; 7.658941ms)
Mar  6 02:43:27.533: INFO: (11) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 8.395955ms)
Mar  6 02:43:27.535: INFO: (11) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 9.356815ms)
Mar  6 02:43:27.535: INFO: (11) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 9.685836ms)
Mar  6 02:43:27.538: INFO: (12) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 2.983573ms)
Mar  6 02:43:27.539: INFO: (12) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 4.460149ms)
Mar  6 02:43:27.539: INFO: (12) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test (200; 5.508753ms)
Mar  6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 5.616069ms)
Mar  6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 5.457181ms)
Mar  6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 5.917445ms)
Mar  6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 5.837701ms)
Mar  6 02:43:27.541: INFO: (12) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 5.741329ms)
Mar  6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.39232ms)
Mar  6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.479814ms)
Mar  6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 7.9654ms)
Mar  6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.343281ms)
Mar  6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 8.229582ms)
Mar  6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.309626ms)
Mar  6 02:43:27.543: INFO: (12) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 8.203791ms)
Mar  6 02:43:27.549: INFO: (13) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 5.369043ms)
Mar  6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.366454ms)
Mar  6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 6.132908ms)
Mar  6 02:43:27.550: INFO: (13) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test<... (200; 8.039559ms)
Mar  6 02:43:27.552: INFO: (13) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.36915ms)
Mar  6 02:43:27.552: INFO: (13) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 8.794062ms)
Mar  6 02:43:27.552: INFO: (13) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.951988ms)
Mar  6 02:43:27.553: INFO: (13) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 9.43591ms)
Mar  6 02:43:27.556: INFO: (14) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 2.751228ms)
Mar  6 02:43:27.557: INFO: (14) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 3.268472ms)
Mar  6 02:43:27.557: INFO: (14) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 3.456627ms)
Mar  6 02:43:27.557: INFO: (14) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.541233ms)
Mar  6 02:43:27.559: INFO: (14) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 4.68479ms)
Mar  6 02:43:27.559: INFO: (14) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 4.872894ms)
Mar  6 02:43:27.559: INFO: (14) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 4.362031ms)
Mar  6 02:43:27.559: INFO: (14) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 4.66183ms)
Mar  6 02:43:27.560: INFO: (14) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 5.313539ms)
Mar  6 02:43:27.560: INFO: (14) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 5.722675ms)
Mar  6 02:43:27.562: INFO: (14) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 8.180861ms)
Mar  6 02:43:27.562: INFO: (14) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.064862ms)
Mar  6 02:43:27.563: INFO: (14) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test<... (200; 5.711809ms)
Mar  6 02:43:27.569: INFO: (15) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 5.900699ms)
Mar  6 02:43:27.570: INFO: (15) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.051572ms)
Mar  6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 7.606199ms)
Mar  6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.802908ms)
Mar  6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 7.453391ms)
Mar  6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 7.636269ms)
Mar  6 02:43:27.571: INFO: (15) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 7.647343ms)
Mar  6 02:43:27.572: INFO: (15) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.281613ms)
Mar  6 02:43:27.572: INFO: (15) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 8.234843ms)
Mar  6 02:43:27.576: INFO: (16) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 3.677909ms)
Mar  6 02:43:27.576: INFO: (16) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test<... (200; 6.037482ms)
Mar  6 02:43:27.578: INFO: (16) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 5.722077ms)
Mar  6 02:43:27.578: INFO: (16) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 6.55223ms)
Mar  6 02:43:27.579: INFO: (16) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 6.635868ms)
Mar  6 02:43:27.579: INFO: (16) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.146482ms)
Mar  6 02:43:27.579: INFO: (16) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 6.8989ms)
Mar  6 02:43:27.580: INFO: (16) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 7.633971ms)
Mar  6 02:43:27.580: INFO: (16) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 7.727348ms)
Mar  6 02:43:27.581: INFO: (16) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 8.027032ms)
Mar  6 02:43:27.581: INFO: (16) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.384209ms)
Mar  6 02:43:27.581: INFO: (16) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 8.190723ms)
Mar  6 02:43:27.582: INFO: (16) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 9.317653ms)
Mar  6 02:43:27.582: INFO: (16) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 9.447303ms)
Mar  6 02:43:27.584: INFO: (17) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 2.239256ms)
Mar  6 02:43:27.584: INFO: (17) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 2.184016ms)
Mar  6 02:43:27.586: INFO: (17) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 2.926124ms)
Mar  6 02:43:27.586: INFO: (17) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 3.219131ms)
Mar  6 02:43:27.587: INFO: (17) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 3.80036ms)
Mar  6 02:43:27.589: INFO: (17) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 5.77971ms)
Mar  6 02:43:27.589: INFO: (17) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 6.471558ms)
Mar  6 02:43:27.589: INFO: (17) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 6.703507ms)
Mar  6 02:43:27.589: INFO: (17) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 6.428621ms)
Mar  6 02:43:27.590: INFO: (17) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.69784ms)
Mar  6 02:43:27.590: INFO: (17) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 8.124954ms)
Mar  6 02:43:27.591: INFO: (17) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test<... (200; 3.509488ms)
Mar  6 02:43:27.596: INFO: (18) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: ... (200; 3.656624ms)
Mar  6 02:43:27.598: INFO: (18) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 4.978599ms)
Mar  6 02:43:27.598: INFO: (18) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 5.833629ms)
Mar  6 02:43:27.599: INFO: (18) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:460/proxy/: tls baz (200; 5.896292ms)
Mar  6 02:43:27.600: INFO: (18) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 6.981053ms)
Mar  6 02:43:27.600: INFO: (18) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:162/proxy/: bar (200; 7.067662ms)
Mar  6 02:43:27.600: INFO: (18) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 7.282525ms)
Mar  6 02:43:27.600: INFO: (18) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp/proxy/: test (200; 7.616993ms)
Mar  6 02:43:27.601: INFO: (18) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.596923ms)
Mar  6 02:43:27.601: INFO: (18) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 8.401555ms)
Mar  6 02:43:27.601: INFO: (18) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 8.786507ms)
Mar  6 02:43:27.602: INFO: (18) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 8.980343ms)
Mar  6 02:43:27.611: INFO: (19) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:443/proxy/: test (200; 9.21287ms)
Mar  6 02:43:27.612: INFO: (19) /api/v1/namespaces/proxy-7131/pods/http:proxy-service-jw2qq-p6vtp:1080/proxy/: ... (200; 9.314835ms)
Mar  6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/pods/https:proxy-service-jw2qq-p6vtp:462/proxy/: tls qux (200; 10.20391ms)
Mar  6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:160/proxy/: foo (200; 10.335882ms)
Mar  6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/pods/proxy-service-jw2qq-p6vtp:1080/proxy/: test<... (200; 10.279634ms)
Mar  6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname2/proxy/: tls qux (200; 11.108106ms)
Mar  6 02:43:27.613: INFO: (19) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname1/proxy/: foo (200; 11.098982ms)
Mar  6 02:43:27.614: INFO: (19) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname1/proxy/: foo (200; 11.630253ms)
Mar  6 02:43:27.615: INFO: (19) /api/v1/namespaces/proxy-7131/services/proxy-service-jw2qq:portname2/proxy/: bar (200; 13.247151ms)
Mar  6 02:43:27.621: INFO: (19) /api/v1/namespaces/proxy-7131/services/http:proxy-service-jw2qq:portname2/proxy/: bar (200; 19.348249ms)
Mar  6 02:43:27.622: INFO: (19) /api/v1/namespaces/proxy-7131/services/https:proxy-service-jw2qq:tlsportname1/proxy/: tls baz (200; 20.210076ms)
STEP: deleting ReplicationController proxy-service-jw2qq in namespace proxy-7131, will wait for the garbage collector to delete the pods
Mar  6 02:43:27.679: INFO: Deleting ReplicationController proxy-service-jw2qq took: 4.632894ms
Mar  6 02:43:28.279: INFO: Terminating ReplicationController proxy-service-jw2qq pods took: 600.155129ms
[AfterEach] version v1
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:35.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7131" for this suite.

• [SLOW TEST:18.022 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":18,"skipped":282,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:35.186: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-3457
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Mar  6 02:43:35.325: INFO: Pod name pod-release: Found 0 pods out of 1
Mar  6 02:43:40.335: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:41.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3457" for this suite.

• [SLOW TEST:6.212 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":19,"skipped":308,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:41.398: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-430
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:41.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-430" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":20,"skipped":331,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:41.563: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8136
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:41.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8136" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":21,"skipped":369,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:41.722: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9454
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar  6 02:43:44.469: INFO: Successfully updated pod "annotationupdate8dccc44e-23b2-438e-818c-7e0ea5200f23"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:48.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9454" for this suite.

• [SLOW TEST:6.790 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":371,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:48.512: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-742
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-4924016d-63ed-44f9-9335-0167571a0a67
STEP: Creating a pod to test consume secrets
Mar  6 02:43:48.663: INFO: Waiting up to 5m0s for pod "pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87" in namespace "secrets-742" to be "success or failure"
Mar  6 02:43:48.665: INFO: Pod "pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.858025ms
Mar  6 02:43:50.669: INFO: Pod "pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006411129s
STEP: Saw pod success
Mar  6 02:43:50.669: INFO: Pod "pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87" satisfied condition "success or failure"
Mar  6 02:43:50.672: INFO: Trying to get logs from node worker02 pod pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87 container secret-volume-test: 
STEP: delete the pod
Mar  6 02:43:50.686: INFO: Waiting for pod pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87 to disappear
Mar  6 02:43:50.689: INFO: Pod pod-secrets-e6da0ef9-6fed-40e2-bf61-61d7a43f9b87 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:50.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-742" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":419,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:50.699: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-7010
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 02:43:50.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 version'
Mar  6 02:43:50.894: INFO: stderr: ""
Mar  6 02:43:50.894: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T18:14:22Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T18:07:13Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:43:50.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7010" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":24,"skipped":423,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:43:50.904: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6960
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6960
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-6960
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6960
Mar  6 02:43:51.049: INFO: Found 0 stateful pods, waiting for 1
Mar  6 02:44:01.053: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Mar  6 02:44:01.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 02:44:01.237: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 02:44:01.237: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 02:44:01.237: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 02:44:01.239: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar  6 02:44:11.242: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar  6 02:44:11.242: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 02:44:11.252: INFO: POD   NODE      PHASE    GRACE  CONDITIONS
Mar  6 02:44:11.252: INFO: ss-0  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC  }]
Mar  6 02:44:11.252: INFO: 
Mar  6 02:44:11.252: INFO: StatefulSet ss has not reached scale 3, at 1
Mar  6 02:44:12.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99791637s
Mar  6 02:44:13.257: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99504123s
Mar  6 02:44:14.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.992370488s
Mar  6 02:44:15.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.989731536s
Mar  6 02:44:16.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.986861957s
Mar  6 02:44:17.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.984250986s
Mar  6 02:44:18.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.981284768s
Mar  6 02:44:19.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.978504068s
Mar  6 02:44:20.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 975.959906ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6960
Mar  6 02:44:21.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 02:44:21.455: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar  6 02:44:21.455: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 02:44:21.455: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 02:44:21.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 02:44:21.576: INFO: rc: 1
Mar  6 02:44:21.576: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar  6 02:44:31.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 02:44:31.753: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Mar  6 02:44:31.753: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 02:44:31.753: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 02:44:31.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 02:44:31.922: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Mar  6 02:44:31.922: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 02:44:31.922: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 02:44:31.926: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 02:44:31.926: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 02:44:31.926: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Mar  6 02:44:31.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 02:44:32.141: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 02:44:32.141: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 02:44:32.141: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 02:44:32.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 02:44:32.355: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 02:44:32.355: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 02:44:32.355: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 02:44:32.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-6960 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 02:44:32.529: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 02:44:32.529: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 02:44:32.529: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 02:44:32.529: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 02:44:32.532: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Mar  6 02:44:42.537: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar  6 02:44:42.537: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar  6 02:44:42.537: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar  6 02:44:42.545: INFO: POD   NODE      PHASE    GRACE  CONDITIONS
Mar  6 02:44:42.545: INFO: ss-0  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC  }]
Mar  6 02:44:42.545: INFO: ss-1  worker01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC  }]
Mar  6 02:44:42.545: INFO: ss-2  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC  }]
Mar  6 02:44:42.545: INFO: 
Mar  6 02:44:42.545: INFO: StatefulSet ss has not reached scale 0, at 3
Mar  6 02:44:43.548: INFO: POD   NODE      PHASE    GRACE  CONDITIONS
Mar  6 02:44:43.548: INFO: ss-0  worker02  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:43:51 +0000 UTC  }]
Mar  6 02:44:43.548: INFO: ss-1  worker01  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC  }]
Mar  6 02:44:43.548: INFO: ss-2  worker02  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:44:11 +0000 UTC  }]
Mar  6 02:44:43.548: INFO: 
Mar  6 02:44:43.548: INFO: StatefulSet ss has not reached scale 0, at 3
Mar  6 02:44:44.550: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.994853321s
Mar  6 02:44:45.552: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.992371171s
Mar  6 02:44:46.555: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.990157101s
Mar  6 02:44:47.557: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.987594658s
Mar  6 02:44:48.560: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.985152476s
Mar  6 02:44:49.562: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.98281703s
Mar  6 02:44:50.565: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.980113327s
Mar  6 02:44:51.567: INFO: Verifying statefulset ss doesn't scale past 0 for another 977.904636ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-6960
Mar  6 02:44:52.570: INFO: Scaling statefulset ss to 0
Mar  6 02:44:52.576: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar  6 02:44:52.578: INFO: Deleting all statefulset in ns statefulset-6960
Mar  6 02:44:52.580: INFO: Scaling statefulset ss to 0
Mar  6 02:44:52.586: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 02:44:52.587: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:44:52.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6960" for this suite.

• [SLOW TEST:61.698 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":25,"skipped":465,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:44:52.603: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-8872
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-e04aa3bf-2c1a-451b-88e3-9b5a7ac027c5 in namespace container-probe-8872
Mar  6 02:44:54.745: INFO: Started pod busybox-e04aa3bf-2c1a-451b-88e3-9b5a7ac027c5 in namespace container-probe-8872
STEP: checking the pod's current state and verifying that restartCount is present
Mar  6 02:44:54.747: INFO: Initial restart count of pod busybox-e04aa3bf-2c1a-451b-88e3-9b5a7ac027c5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:48:55.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8872" for this suite.

• [SLOW TEST:242.521 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":468,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:48:55.124: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-9953
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:48:57.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9953" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":472,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:48:57.297: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-9937
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9937
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Mar  6 02:48:57.523: INFO: Found 0 stateful pods, waiting for 3
Mar  6 02:49:07.526: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 02:49:07.526: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 02:49:07.526: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 02:49:07.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-9937 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 02:49:07.724: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 02:49:07.724: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 02:49:07.724: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar  6 02:49:17.751: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar  6 02:49:27.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-9937 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 02:49:27.940: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar  6 02:49:27.940: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 02:49:27.940: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 02:49:37.953: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update
Mar  6 02:49:37.953: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar  6 02:49:37.953: INFO: Waiting for Pod statefulset-9937/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar  6 02:49:47.959: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update
Mar  6 02:49:47.959: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar  6 02:49:47.959: INFO: Waiting for Pod statefulset-9937/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar  6 02:49:57.958: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update
Mar  6 02:49:57.958: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar  6 02:50:07.958: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update
Mar  6 02:50:07.958: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Mar  6 02:50:17.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-9937 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 02:50:18.294: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 02:50:18.294: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 02:50:18.294: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 02:50:28.322: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar  6 02:50:38.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-9937 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 02:50:38.518: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar  6 02:50:38.518: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 02:50:38.518: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 02:50:58.531: INFO: Waiting for StatefulSet statefulset-9937/ss2 to complete update
Mar  6 02:50:58.531: INFO: Waiting for Pod statefulset-9937/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar  6 02:51:08.536: INFO: Deleting all statefulset in ns statefulset-9937
Mar  6 02:51:08.538: INFO: Scaling statefulset ss2 to 0
Mar  6 02:51:28.548: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 02:51:28.551: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:51:28.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9937" for this suite.

• [SLOW TEST:151.270 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":28,"skipped":480,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:51:28.567: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-7757
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Mar  6 02:51:28.700: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6448 0 2020-03-06 02:51:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar  6 02:51:28.700: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6448 0 2020-03-06 02:51:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Mar  6 02:51:38.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6543 0 2020-03-06 02:51:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Mar  6 02:51:38.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6543 0 2020-03-06 02:51:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Mar  6 02:51:48.718: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6572 0 2020-03-06 02:51:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar  6 02:51:48.718: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6572 0 2020-03-06 02:51:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Mar  6 02:51:58.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6601 0 2020-03-06 02:51:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar  6 02:51:58.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-a 4f40eb2b-59eb-4b6f-9803-b4845fb27fe9 6601 0 2020-03-06 02:51:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Mar  6 02:52:08.730: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-b 3249b713-6008-4219-8f77-6469c3a14ca6 6630 0 2020-03-06 02:52:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar  6 02:52:08.730: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-b 3249b713-6008-4219-8f77-6469c3a14ca6 6630 0 2020-03-06 02:52:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Mar  6 02:52:18.736: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-b 3249b713-6008-4219-8f77-6469c3a14ca6 6659 0 2020-03-06 02:52:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar  6 02:52:18.736: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7757 /api/v1/namespaces/watch-7757/configmaps/e2e-watch-test-configmap-b 3249b713-6008-4219-8f77-6469c3a14ca6 6659 0 2020-03-06 02:52:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:52:28.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7757" for this suite.

• [SLOW TEST:60.177 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":29,"skipped":516,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:52:28.744: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-5712
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e
Mar  6 02:52:28.883: INFO: Pod name my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e: Found 0 pods out of 1
Mar  6 02:52:33.885: INFO: Pod name my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e: Found 1 pods out of 1
Mar  6 02:52:33.885: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e" are running
Mar  6 02:52:33.887: INFO: Pod "my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e-n7mjx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 02:52:28 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 02:52:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 02:52:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 02:52:28 +0000 UTC Reason: Message:}])
Mar  6 02:52:33.887: INFO: Trying to dial the pod
Mar  6 02:52:38.895: INFO: Controller my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e: Got expected result from replica 1 [my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e-n7mjx]: "my-hostname-basic-29703e58-f167-4229-89c4-828675e6b62e-n7mjx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:52:38.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5712" for this suite.

• [SLOW TEST:10.158 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":30,"skipped":519,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:52:38.902: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-2404
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 02:52:39.039: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Mar  6 02:52:41.060: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:52:42.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2404" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":31,"skipped":528,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:52:42.072: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9984
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-3202b067-4a22-4e48-b3d3-4c930b2a1c8a
STEP: Creating a pod to test consume configMaps
Mar  6 02:52:42.212: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4" in namespace "projected-9984" to be "success or failure"
Mar  6 02:52:42.219: INFO: Pod "pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.202929ms
Mar  6 02:52:44.222: INFO: Pod "pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009666414s
STEP: Saw pod success
Mar  6 02:52:44.222: INFO: Pod "pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4" satisfied condition "success or failure"
Mar  6 02:52:44.223: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4 container projected-configmap-volume-test: 
STEP: delete the pod
Mar  6 02:52:44.260: INFO: Waiting for pod pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4 to disappear
Mar  6 02:52:44.262: INFO: Pod pod-projected-configmaps-a1a3e3bd-8d08-41d1-839a-1cfd8ca9a5b4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:52:44.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9984" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":556,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:52:44.269: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-7311
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 02:52:44.402: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:53:43.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7311" for this suite.

• [SLOW TEST:59.262 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":33,"skipped":565,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:53:43.531: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename hostpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in hostpath-4388
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Mar  6 02:53:43.713: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4388" to be "success or failure"
Mar  6 02:53:43.726: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.00881ms
Mar  6 02:53:45.728: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015481855s
STEP: Saw pod success
Mar  6 02:53:45.728: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Mar  6 02:53:45.730: INFO: Trying to get logs from node worker02 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Mar  6 02:53:45.743: INFO: Waiting for pod pod-host-path-test to disappear
Mar  6 02:53:45.745: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:53:45.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4388" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":570,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:53:45.752: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-1237
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-4qb6
STEP: Creating a pod to test atomic-volume-subpath
Mar  6 02:53:45.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4qb6" in namespace "subpath-1237" to be "success or failure"
Mar  6 02:53:45.895: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155046ms
Mar  6 02:53:47.898: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.004474667s
Mar  6 02:53:49.900: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.007174477s
Mar  6 02:53:51.904: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.010781542s
Mar  6 02:53:53.906: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.013306742s
Mar  6 02:53:55.911: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.017886111s
Mar  6 02:53:57.915: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.02171346s
Mar  6 02:53:59.918: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 14.024400054s
Mar  6 02:54:01.920: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 16.02720213s
Mar  6 02:54:03.926: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.032624592s
Mar  6 02:54:05.928: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.034802806s
Mar  6 02:54:07.932: INFO: Pod "pod-subpath-test-secret-4qb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.03883423s
STEP: Saw pod success
Mar  6 02:54:07.932: INFO: Pod "pod-subpath-test-secret-4qb6" satisfied condition "success or failure"
Mar  6 02:54:07.936: INFO: Trying to get logs from node worker02 pod pod-subpath-test-secret-4qb6 container test-container-subpath-secret-4qb6: 
STEP: delete the pod
Mar  6 02:54:07.969: INFO: Waiting for pod pod-subpath-test-secret-4qb6 to disappear
Mar  6 02:54:07.974: INFO: Pod pod-subpath-test-secret-4qb6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-4qb6
Mar  6 02:54:07.974: INFO: Deleting pod "pod-subpath-test-secret-4qb6" in namespace "subpath-1237"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:54:07.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1237" for this suite.

• [SLOW TEST:22.245 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":35,"skipped":592,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:54:07.998: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-9728
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9728
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar  6 02:54:08.164: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar  6 02:54:32.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.46:8080/dial?request=hostname&protocol=http&host=10.244.4.13&port=8080&tries=1'] Namespace:pod-network-test-9728 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 02:54:32.233: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 02:54:32.352: INFO: Waiting for responses: map[]
Mar  6 02:54:32.355: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.46:8080/dial?request=hostname&protocol=http&host=10.244.3.45&port=8080&tries=1'] Namespace:pod-network-test-9728 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 02:54:32.355: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 02:54:32.488: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:54:32.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9728" for this suite.

• [SLOW TEST:24.498 seconds]
[sig-network] Networking
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":657,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:54:32.496: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-9856
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Mar  6 02:54:32.644: INFO: Waiting up to 5m0s for pod "client-containers-295365a7-6596-445f-ae86-38d6e67b6527" in namespace "containers-9856" to be "success or failure"
Mar  6 02:54:32.646: INFO: Pod "client-containers-295365a7-6596-445f-ae86-38d6e67b6527": Phase="Pending", Reason="", readiness=false. Elapsed: 1.845193ms
Mar  6 02:54:34.652: INFO: Pod "client-containers-295365a7-6596-445f-ae86-38d6e67b6527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007862944s
STEP: Saw pod success
Mar  6 02:54:34.652: INFO: Pod "client-containers-295365a7-6596-445f-ae86-38d6e67b6527" satisfied condition "success or failure"
Mar  6 02:54:34.655: INFO: Trying to get logs from node worker02 pod client-containers-295365a7-6596-445f-ae86-38d6e67b6527 container test-container: 
STEP: delete the pod
Mar  6 02:54:34.712: INFO: Waiting for pod client-containers-295365a7-6596-445f-ae86-38d6e67b6527 to disappear
Mar  6 02:54:34.714: INFO: Pod client-containers-295365a7-6596-445f-ae86-38d6e67b6527 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:54:34.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9856" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":699,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:54:34.732: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-4195
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:54:50.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4195" for this suite.

• [SLOW TEST:16.205 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":38,"skipped":713,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:54:50.937: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8272
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar  6 02:54:51.071: INFO: Waiting up to 5m0s for pod "pod-dda41727-2cda-48e1-b867-13771a35004b" in namespace "emptydir-8272" to be "success or failure"
Mar  6 02:54:51.073: INFO: Pod "pod-dda41727-2cda-48e1-b867-13771a35004b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.872287ms
Mar  6 02:54:53.075: INFO: Pod "pod-dda41727-2cda-48e1-b867-13771a35004b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004246433s
Mar  6 02:54:55.078: INFO: Pod "pod-dda41727-2cda-48e1-b867-13771a35004b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006462419s
STEP: Saw pod success
Mar  6 02:54:55.078: INFO: Pod "pod-dda41727-2cda-48e1-b867-13771a35004b" satisfied condition "success or failure"
Mar  6 02:54:55.079: INFO: Trying to get logs from node worker02 pod pod-dda41727-2cda-48e1-b867-13771a35004b container test-container: 
STEP: delete the pod
Mar  6 02:54:55.092: INFO: Waiting for pod pod-dda41727-2cda-48e1-b867-13771a35004b to disappear
Mar  6 02:54:55.095: INFO: Pod pod-dda41727-2cda-48e1-b867-13771a35004b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:54:55.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8272" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":714,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:54:55.105: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2842
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 02:54:55.726: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 02:54:58.752: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 02:54:58.754: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6607-crds.webhook.example.com via the AdmissionRegistration API
Mar  6 02:55:14.299: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:55:24.413: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:55:34.514: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:55:44.620: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:55:54.630: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:55:54.630: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-2842".
STEP: Found 6 events.
Mar  6 02:55:55.149: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-hh976: {default-scheduler } Scheduled: Successfully assigned webhook-2842/sample-webhook-deployment-5f65f8c764-hh976 to worker02
Mar  6 02:55:55.149: INFO: At 2020-03-06 02:54:55 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 02:55:55.149: INFO: At 2020-03-06 02:54:55 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-hh976
Mar  6 02:55:55.149: INFO: At 2020-03-06 02:54:56 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-hh976: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 02:55:55.149: INFO: At 2020-03-06 02:54:56 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-hh976: {kubelet worker02} Created: Created container sample-webhook
Mar  6 02:55:55.149: INFO: At 2020-03-06 02:54:56 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-hh976: {kubelet worker02} Started: Started container sample-webhook
Mar  6 02:55:55.151: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 02:55:55.151: INFO: sample-webhook-deployment-5f65f8c764-hh976  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:54:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:54:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:54:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:54:55 +0000 UTC  }]
Mar  6 02:55:55.151: INFO: 
Mar  6 02:55:55.154: INFO: 
Logging node info for node master01
Mar  6 02:55:55.156: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 7194 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
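The Node dump above reports readiness via a list of NodeCondition entries (MemoryPressure, DiskPressure, PIDPressure all False; Ready True). A minimal sketch of how such a condition list can be evaluated — `node_is_ready` is a hypothetical helper, not part of the e2e framework, and the sample data is transcribed from the master01 conditions shown above:

```python
def node_is_ready(conditions):
    """conditions: list of dicts with 'type' and 'status' keys,
    mirroring the NodeCondition entries in the dump above."""
    by_type = {c["type"]: c["status"] for c in conditions}
    # All pressure conditions must be False and Ready must be True.
    pressure_ok = all(
        by_type.get(t) == "False"
        for t in ("MemoryPressure", "DiskPressure", "PIDPressure")
    )
    return by_type.get("Ready") == "True" and pressure_ok

# Conditions as reported for master01 in the dump above.
master01 = [
    {"type": "MemoryPressure", "status": "False"},
    {"type": "DiskPressure", "status": "False"},
    {"type": "PIDPressure", "status": "False"},
    {"type": "Ready", "status": "True"},
]
print(node_is_ready(master01))  # → True
```

The same four condition types appear for every node in this dump, so the helper applies unchanged to master02, master03, worker01, and worker02.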
Mar  6 02:55:55.156: INFO: 
Logging kubelet events for node master01
Mar  6 02:55:55.160: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 02:55:55.170: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:55:55.170: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:55:55.170: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 02:55:55.170: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.170: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 02:55:55.170: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.170: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 02:55:55.170: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.170: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 02:55:55.170: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.170: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 02:55:55.170: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:55:55.170: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:55:55.170: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 02:55:55.173076      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:55:55.188: INFO: 
Latency metrics for node master01
Mar  6 02:55:55.188: INFO: 
Logging node info for node master02
Mar  6 02:55:55.190: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 7180 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 02:55:55.190: INFO: 
Logging kubelet events for node master02
Mar  6 02:55:55.194: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 02:55:55.205: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.205: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 02:55:55.205: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.205: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 02:55:55.205: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:55:55.205: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:55:55.205: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 02:55:55.205: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.205: INFO: 	Container coredns ready: true, restart count 0
Mar  6 02:55:55.205: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:55:55.205: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:55:55.205: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 02:55:55.205: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.205: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 02:55:55.205: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.205: INFO: 	Container kube-controller-manager ready: true, restart count 1
W0306 02:55:55.210352      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:55:55.229: INFO: 
Latency metrics for node master02
Mar  6 02:55:55.229: INFO: 
Logging node info for node master03
Mar  6 02:55:55.231: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 7181 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 02:55:55.231: INFO: 
Logging kubelet events for node master03
Mar  6 02:55:55.235: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 02:55:55.245: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:55:55.245: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 02:55:55.245: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 02:55:55.245: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 02:55:55.245: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 02:55:55.245: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 02:55:55.245: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Container coredns ready: true, restart count 0
Mar  6 02:55:55.245: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 02:55:55.245: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:55:55.245: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 02:55:55.245: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.245: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 02:55:55.248664      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:55:55.263: INFO: 
Latency metrics for node master03
Mar  6 02:55:55.263: INFO: 
Logging node info for node worker01
Mar  6 02:55:55.265: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 7294 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 02:55:55.265: INFO: 
Logging kubelet events for node worker01
Mar  6 02:55:55.269: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 02:55:55.280: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:55:55.280: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 02:55:55.280: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 02:55:55.280: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 02:55:55.280: INFO: 	Container envoy ready: false, restart count 0
Mar  6 02:55:55.280: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container contour ready: false, restart count 0
Mar  6 02:55:55.280: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container kuard ready: true, restart count 0
Mar  6 02:55:55.280: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container kuard ready: true, restart count 0
Mar  6 02:55:55.280: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container contour ready: false, restart count 0
Mar  6 02:55:55.280: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:55:55.280: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 02:55:55.280: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container contour ready: false, restart count 0
Mar  6 02:55:55.280: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 02:55:55.280: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.280: INFO: 	Container kuard ready: true, restart count 0
W0306 02:55:55.282484      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:55:55.301: INFO: 
Latency metrics for node worker01
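Note that worker01 is the only node in this dump with containers reporting `ready: false` (the envoy and contour pods). A small sketch, with pod data transcribed from the worker01 lines above, showing how such a per-pod container-readiness listing can be reduced to the set of pods that are not fully ready:

```python
# Per-pod container readiness, transcribed from the worker01 log lines above
# (init and regular containers flattened together; values are the 'ready' flags).
pods = {
    "envoy-lvmcb": {"envoy-initconfig": False, "envoy": False},
    "contour-certgen-82k46": {"contour": False},
    "contour-54748c65f5-gk5sz": {"contour": False},
    "contour-54748c65f5-jl5wz": {"contour": False},
    "kuard-678c676f5d-m29b6": {"kuard": True},
    "metrics-server-78799bf646-xrsnn": {"metrics-server": True},
}

# A pod is fully ready only if every recorded container status is ready.
not_ready = sorted(
    name for name, containers in pods.items()
    if not all(containers.values())
)
print(not_ready)
# → ['contour-54748c65f5-gk5sz', 'contour-54748c65f5-jl5wz',
#    'contour-certgen-82k46', 'envoy-lvmcb']
```

This matches the dump: every contour/envoy container on worker01 reports `ready: false`, while the kuard and metrics-server containers are ready.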
Mar  6 02:55:55.301: INFO: 
Logging node info for node worker02
Mar  6 02:55:55.303: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 7669 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 
sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
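In the node dump above, `Capacity` and `Allocatable` differ only in memory and ephemeral-storage; the gap is what the kubelet holds back for system reservations and eviction thresholds. A quick sketch using the numbers from worker02's dump (values copied from the log, keys abbreviated):

```python
# Capacity vs Allocatable for worker02, taken from the node dump above.
capacity    = {"cpu": 2, "memory": 3823214592, "pods": 110}
allocatable = {"cpu": 2, "memory": 3718356992, "pods": 110}

# The difference is the kubelet's reservation (kube/system reserved + eviction threshold).
reserved = {k: capacity[k] - allocatable[k] for k in capacity}
print(reserved)  # → {'cpu': 0, 'memory': 104857600, 'pods': 0}

# 104857600 bytes is exactly 100 MiB of reserved memory.
print(reserved["memory"] == 100 * 1024 * 1024)  # → True
```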
Mar  6 02:55:55.303: INFO: 
Logging kubelet events for node worker02
Mar  6 02:55:55.307: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 02:55:55.311: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:55:55.311: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:55:55.311: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 02:55:55.311: INFO: sample-webhook-deployment-5f65f8c764-hh976 started at 2020-03-06 02:54:55 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.311: INFO: 	Container sample-webhook ready: true, restart count 0
Mar  6 02:55:55.311: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:55:55.311: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:55:55.311: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 02:55:55.311: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:55:55.311: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 02:55:55.311: INFO: 	Container envoy ready: false, restart count 0
Mar  6 02:55:55.311: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:55:55.311: INFO: 	Container e2e ready: true, restart count 0
Mar  6 02:55:55.311: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:55:55.311: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.311: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 02:55:55.311: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:55:55.311: INFO: 	Container kube-sonobuoy ready: true, restart count 0
W0306 02:55:55.316605      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:55:55.337: INFO: 
Latency metrics for node worker02
Mar  6 02:55:55.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2842" for this suite.
STEP: Destroying namespace "webhook-2842-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [60.324 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 02:55:54.630: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1865
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":39,"skipped":723,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
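The `{"msg": ...}` progress lines interleaved in this log are machine-readable JSON, so the running failure list can be pulled out without reading the whole transcript. A minimal Python sketch (the sample line below is shortened and illustrative, not a verbatim copy of the log):

```python
import json

def failed_tests(log_lines):
    """Scan e2e progress lines (JSON objects starting with {"msg") and
    return the failure list from the most recent one seen."""
    failures = []
    for line in log_lines:
        line = line.strip()
        if not line.startswith('{"msg"'):
            continue  # ordinary log line, not a progress record
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # wrapped or truncated record; skip it
        failures = record.get("failures", failures)
    return failures

# Shortened example of a progress line like the FAILED record above.
sample = ('{"msg":"FAILED should mutate custom resource","total":278,'
          '"completed":39,"skipped":723,"failed":2,'
          '"failures":["should honor timeout","should mutate custom resource"]}')
print(failed_tests([sample]))  # → ['should honor timeout', 'should mutate custom resource']
```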
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:55:55.429: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-3368
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-krjn
STEP: Creating a pod to test atomic-volume-subpath
Mar  6 02:55:55.594: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-krjn" in namespace "subpath-3368" to be "success or failure"
Mar  6 02:55:55.598: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.546284ms
Mar  6 02:55:57.600: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 2.006362591s
Mar  6 02:55:59.603: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 4.008996699s
Mar  6 02:56:01.605: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 6.011343078s
Mar  6 02:56:03.610: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 8.015840701s
Mar  6 02:56:05.613: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 10.018818621s
Mar  6 02:56:07.617: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 12.023012986s
Mar  6 02:56:09.620: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 14.02574638s
Mar  6 02:56:11.622: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 16.028159563s
Mar  6 02:56:13.624: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 18.030421715s
Mar  6 02:56:15.627: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Running", Reason="", readiness=true. Elapsed: 20.033125167s
Mar  6 02:56:17.630: INFO: Pod "pod-subpath-test-downwardapi-krjn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.035580817s
STEP: Saw pod success
Mar  6 02:56:17.630: INFO: Pod "pod-subpath-test-downwardapi-krjn" satisfied condition "success or failure"
Mar  6 02:56:17.632: INFO: Trying to get logs from node worker02 pod pod-subpath-test-downwardapi-krjn container test-container-subpath-downwardapi-krjn: 
STEP: delete the pod
Mar  6 02:56:17.647: INFO: Waiting for pod pod-subpath-test-downwardapi-krjn to disappear
Mar  6 02:56:17.648: INFO: Pod pod-subpath-test-downwardapi-krjn no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-krjn
Mar  6 02:56:17.648: INFO: Deleting pod "pod-subpath-test-downwardapi-krjn" in namespace "subpath-3368"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:56:17.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3368" for this suite.

• [SLOW TEST:22.231 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":40,"skipped":733,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
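The repeated "Waiting up to 5m0s for pod ... to be \"success or failure\"" lines above come from the framework polling the pod phase every couple of seconds until a terminal phase or a timeout. A generic sketch of that poll loop (a simplified stand-in for the framework's wait helpers, not the actual e2e code):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds until it returns True,
    or raise TimeoutError once `timeout` seconds have elapsed."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    raise TimeoutError("timed out waiting for the condition")

# Simulated pod that reaches a terminal phase on the third poll,
# like pod-subpath-test-downwardapi-krjn going Pending -> Running -> Succeeded.
phases = iter(["Pending", "Running", "Succeeded"])
state = {"phase": "Pending"}

def pod_is_done():
    state["phase"] = next(phases, state["phase"])
    return state["phase"] in ("Succeeded", "Failed")

print(wait_for_condition(pod_is_done, timeout=10, interval=0, sleep=lambda s: None))  # → True
```

The "timed out waiting for the condition" string in the webhook failure earlier in this log is exactly this kind of poll loop hitting its deadline.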
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:56:17.660: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6101
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-3d6d3157-1cd5-4e9c-a543-1b46e030bfb8
STEP: Creating a pod to test consume configMaps
Mar  6 02:56:17.800: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee" in namespace "projected-6101" to be "success or failure"
Mar  6 02:56:17.804: INFO: Pod "pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198548ms
Mar  6 02:56:19.806: INFO: Pod "pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006455227s
STEP: Saw pod success
Mar  6 02:56:19.806: INFO: Pod "pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee" satisfied condition "success or failure"
Mar  6 02:56:19.808: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee container projected-configmap-volume-test: 
STEP: delete the pod
Mar  6 02:56:19.822: INFO: Waiting for pod pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee to disappear
Mar  6 02:56:19.829: INFO: Pod pod-projected-configmaps-2a168c44-b737-4aa3-8346-6608204dbeee no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:56:19.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6101" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":739,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:56:19.835: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6036
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 02:56:19.966: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f89c3abc-6299-43a4-815a-8b415a29629b" in namespace "security-context-test-6036" to be "success or failure"
Mar  6 02:56:19.968: INFO: Pod "busybox-readonly-false-f89c3abc-6299-43a4-815a-8b415a29629b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.891679ms
Mar  6 02:56:21.970: INFO: Pod "busybox-readonly-false-f89c3abc-6299-43a4-815a-8b415a29629b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0043452s
Mar  6 02:56:21.970: INFO: Pod "busybox-readonly-false-f89c3abc-6299-43a4-815a-8b415a29629b" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:56:21.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6036" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":739,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:56:21.977: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8337
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 02:56:22.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6" in namespace "downward-api-8337" to be "success or failure"
Mar  6 02:56:22.125: INFO: Pod "downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.984432ms
Mar  6 02:56:24.127: INFO: Pod "downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005300546s
STEP: Saw pod success
Mar  6 02:56:24.127: INFO: Pod "downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6" satisfied condition "success or failure"
Mar  6 02:56:24.129: INFO: Trying to get logs from node worker02 pod downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6 container client-container: 
STEP: delete the pod
Mar  6 02:56:24.143: INFO: Waiting for pod downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6 to disappear
Mar  6 02:56:24.144: INFO: Pod downwardapi-volume-3dbec8bf-2c79-473f-8ae8-dcd0b31493f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:56:24.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8337" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":762,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:56:24.151: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2374
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar  6 02:56:26.804: INFO: Successfully updated pod "labelsupdatefe2f9bd0-61d8-480f-8b33-29239b58093b"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:56:28.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2374" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":764,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:56:28.832: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8121
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Mar  6 02:56:28.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-8121'
Mar  6 02:56:29.186: INFO: stderr: ""
Mar  6 02:56:29.186: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar  6 02:56:30.189: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 02:56:30.189: INFO: Found 0 / 1
Mar  6 02:56:31.189: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 02:56:31.189: INFO: Found 1 / 1
Mar  6 02:56:31.189: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Mar  6 02:56:31.191: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 02:56:31.191: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar  6 02:56:31.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 patch pod agnhost-master-95zcv --namespace=kubectl-8121 -p {"metadata":{"annotations":{"x":"y"}}}'
Mar  6 02:56:31.269: INFO: stderr: ""
Mar  6 02:56:31.269: INFO: stdout: "pod/agnhost-master-95zcv patched\n"
STEP: checking annotations
Mar  6 02:56:31.271: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 02:56:31.271: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:56:31.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8121" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":45,"skipped":851,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
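The `kubectl patch pod ... -p {"metadata":{"annotations":{"x":"y"}}}` invocation above merges the patch body into the existing object; for a simple map like annotations, kubectl's default strategic merge patch behaves like an RFC 7386 JSON Merge Patch. A sketch of that merge semantics (illustrative, not kubectl's actual implementation):

```python
import json

def merge_patch(original, patch):
    """Apply an RFC 7386 JSON Merge Patch: dicts merge recursively,
    None deletes a key, and any non-dict value replaces wholesale."""
    if not isinstance(patch, dict):
        return patch
    result = dict(original) if isinstance(original, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # explicit null removes the key
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

pod = {"metadata": {"name": "agnhost-master-95zcv", "annotations": {"a": "b"}}}
patch = json.loads('{"metadata":{"annotations":{"x":"y"}}}')
merged = merge_patch(pod, patch)
print(merged["metadata"]["annotations"])  # → {'a': 'b', 'x': 'y'}
```

Existing annotations survive and the patched key is added, which is what the "checking annotations" step above verifies on the patched pod.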
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:56:31.277: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4758
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 02:56:31.875: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 02:56:34.912: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
Mar  6 02:56:44.933: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:56:55.045: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:57:05.144: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:57:15.249: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:57:25.259: INFO: Waiting for webhook configuration to be ready...
Mar  6 02:57:25.259: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-4758".
STEP: Found 6 events.
Mar  6 02:57:25.262: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d7djs: {default-scheduler } Scheduled: Successfully assigned webhook-4758/sample-webhook-deployment-5f65f8c764-d7djs to worker02
Mar  6 02:57:25.262: INFO: At 2020-03-06 02:56:31 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 02:57:25.262: INFO: At 2020-03-06 02:56:31 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-d7djs
Mar  6 02:57:25.262: INFO: At 2020-03-06 02:56:32 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d7djs: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 02:57:25.262: INFO: At 2020-03-06 02:56:32 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d7djs: {kubelet worker02} Created: Created container sample-webhook
Mar  6 02:57:25.262: INFO: At 2020-03-06 02:56:32 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d7djs: {kubelet worker02} Started: Started container sample-webhook
Mar  6 02:57:25.265: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 02:57:25.265: INFO: sample-webhook-deployment-5f65f8c764-d7djs  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:56:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:56:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:56:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 02:56:31 +0000 UTC  }]
Mar  6 02:57:25.265: INFO: 
Mar  6 02:57:25.268: INFO: 
Logging node info for node master01
Mar  6 02:57:25.269: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 7194 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 02:57:25.270: INFO: 
Logging kubelet events for node master01
Mar  6 02:57:25.274: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 02:57:25.283: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:57:25.283: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:57:25.283: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 02:57:25.283: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.283: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 02:57:25.283: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.283: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 02:57:25.283: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.283: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 02:57:25.283: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.283: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 02:57:25.283: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:57:25.283: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:57:25.283: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 02:57:25.285756      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:57:25.302: INFO: 
Latency metrics for node master01
Mar  6 02:57:25.302: INFO: 
Logging node info for node master02
Mar  6 02:57:25.311: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 7180 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 02:57:25.311: INFO: 
Logging kubelet events for node master02
Mar  6 02:57:25.318: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 02:57:25.328: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.328: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 02:57:25.328: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.328: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 02:57:25.328: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.328: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 02:57:25.328: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.328: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 02:57:25.328: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:57:25.328: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:57:25.328: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 02:57:25.328: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.328: INFO: 	Container coredns ready: true, restart count 0
Mar  6 02:57:25.328: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:57:25.328: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:57:25.328: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 02:57:25.330411      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:57:25.344: INFO: 
Latency metrics for node master02
Mar  6 02:57:25.345: INFO: 
Logging node info for node master03
Mar  6 02:57:25.346: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 7181 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:53:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 02:57:25.346: INFO: 
Logging kubelet events for node master03
Mar  6 02:57:25.351: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 02:57:25.360: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:57:25.360: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 02:57:25.360: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
Mar  6 02:57:25.360: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 02:57:25.360: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 02:57:25.360: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 02:57:25.360: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 02:57:25.360: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Container coredns ready: true, restart count 0
Mar  6 02:57:25.360: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:57:25.360: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 02:57:25.360: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.360: INFO: 	Container kube-apiserver ready: true, restart count 0
W0306 02:57:25.363325      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:57:25.378: INFO: 
Latency metrics for node master03
Mar  6 02:57:25.378: INFO: 
Logging node info for node worker01
Mar  6 02:57:25.380: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 7294 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:54:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 02:57:25.380: INFO: 
Logging kubelet events for node worker01
Mar  6 02:57:25.384: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 02:57:25.397: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 02:57:25.397: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container kuard ready: true, restart count 0
Mar  6 02:57:25.397: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:57:25.397: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 02:57:25.397: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container contour ready: false, restart count 0
Mar  6 02:57:25.397: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container contour ready: false, restart count 0
Mar  6 02:57:25.397: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container kuard ready: true, restart count 0
Mar  6 02:57:25.397: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container kuard ready: true, restart count 0
Mar  6 02:57:25.397: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container contour ready: false, restart count 0
Mar  6 02:57:25.397: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:57:25.397: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 02:57:25.397: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 02:57:25.397: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:57:25.397: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 02:57:25.397: INFO: 	Container envoy ready: false, restart count 0
W0306 02:57:25.400010      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:57:25.427: INFO: 
Latency metrics for node worker01
Mar  6 02:57:25.427: INFO: 
Logging node info for node worker02
Mar  6 02:57:25.429: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 7669 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 
sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 02:57:25.429: INFO: 
Logging kubelet events for node worker02
Mar  6 02:57:25.432: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 02:57:25.436: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:57:25.436: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 02:57:25.436: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 02:57:25.436: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 02:57:25.436: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 02:57:25.436: INFO: 	Container envoy ready: false, restart count 0
Mar  6 02:57:25.436: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:57:25.436: INFO: 	Container e2e ready: true, restart count 0
Mar  6 02:57:25.436: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:57:25.436: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 02:57:25.436: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 02:57:25.436: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 02:57:25.436: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.436: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 02:57:25.436: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.436: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 02:57:25.436: INFO: sample-webhook-deployment-5f65f8c764-d7djs started at 2020-03-06 02:56:31 +0000 UTC (0+1 container statuses recorded)
Mar  6 02:57:25.436: INFO: 	Container sample-webhook ready: true, restart count 0
W0306 02:57:25.438622      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 02:57:25.458: INFO: 
Latency metrics for node worker02
Mar  6 02:57:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4758" for this suite.
STEP: Destroying namespace "webhook-4758-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [54.252 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 02:57:25.259: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1055
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":45,"skipped":888,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:57:25.529: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9035
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 02:57:25.679: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar  6 02:57:33.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-9035 create -f -'
Mar  6 02:57:33.930: INFO: stderr: ""
Mar  6 02:57:33.930: INFO: stdout: "e2e-test-crd-publish-openapi-3692-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar  6 02:57:33.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-9035 delete e2e-test-crd-publish-openapi-3692-crds test-cr'
Mar  6 02:57:34.009: INFO: stderr: ""
Mar  6 02:57:34.009: INFO: stdout: "e2e-test-crd-publish-openapi-3692-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Mar  6 02:57:34.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-9035 apply -f -'
Mar  6 02:57:34.156: INFO: stderr: ""
Mar  6 02:57:34.156: INFO: stdout: "e2e-test-crd-publish-openapi-3692-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar  6 02:57:34.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-9035 delete e2e-test-crd-publish-openapi-3692-crds test-cr'
Mar  6 02:57:34.233: INFO: stderr: ""
Mar  6 02:57:34.233: INFO: stdout: "e2e-test-crd-publish-openapi-3692-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar  6 02:57:34.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-3692-crds'
Mar  6 02:57:34.433: INFO: stderr: ""
Mar  6 02:57:34.433: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3692-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:57:37.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9035" for this suite.

• [SLOW TEST:11.656 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":46,"skipped":890,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:57:37.185: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-5529
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-19319ffa-02ef-48f1-8d9a-835deed2a25a
STEP: Creating a pod to test consume secrets
Mar  6 02:57:37.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b" in namespace "projected-5529" to be "success or failure"
Mar  6 02:57:37.345: INFO: Pod "pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.983062ms
Mar  6 02:57:39.348: INFO: Pod "pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005548591s
STEP: Saw pod success
Mar  6 02:57:39.348: INFO: Pod "pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b" satisfied condition "success or failure"
Mar  6 02:57:39.350: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b container projected-secret-volume-test: 
STEP: delete the pod
Mar  6 02:57:39.363: INFO: Waiting for pod pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b to disappear
Mar  6 02:57:39.366: INFO: Pod pod-projected-secrets-8824f421-1259-44d9-b8c2-b77f999f153b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:57:39.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5529" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":896,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:57:39.372: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename replicaset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-5003
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Mar  6 02:57:42.526: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:57:43.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5003" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":48,"skipped":914,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:57:43.549: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename proxy
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in proxy-411
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 02:57:43.694: INFO: (0) /api/v1/nodes/worker01:10250/proxy/logs/: 
anaconda/
audit/
boot.log
[... identical "anaconda/ audit/ boot.log" listing repeated for each remaining proxied request; log truncated here ...]
>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2127
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2127
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2127
I0306 02:57:43.924347      19 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2127, replica count: 2
Mar  6 02:57:46.974: INFO: Creating new exec pod
I0306 02:57:46.974597      19 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar  6 02:57:49.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Mar  6 02:57:50.178: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Mar  6 02:57:50.178: INFO: stdout: ""
Mar  6 02:57:50.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 10.102.171.180 80'
Mar  6 02:57:50.390: INFO: stderr: "+ nc -zv -t -w 2 10.102.171.180 80\nConnection to 10.102.171.180 80 port [tcp/http] succeeded!\n"
Mar  6 02:57:50.390: INFO: stdout: ""
Mar  6 02:57:50.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.250 31433'
Mar  6 02:57:50.598: INFO: stderr: "+ nc -zv -t -w 2 192.168.1.250 31433\nConnection to 192.168.1.250 31433 port [tcp/31433] succeeded!\n"
Mar  6 02:57:50.598: INFO: stdout: ""
Mar  6 02:57:50.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.251 31433'
Mar  6 02:57:52.825: INFO: rc: 1
Mar  6 02:57:52.825: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.251 31433:
Command stdout:

stderr:
+ nc -zv -t -w 2 192.168.1.251 31433
nc: connect to 192.168.1.251 port 31433 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Mar  6 02:57:53.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2127 execpodqvhl8 -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.251 31433'
Mar  6 02:57:54.002: INFO: stderr: "+ nc -zv -t -w 2 192.168.1.251 31433\nConnection to 192.168.1.251 31433 port [tcp/31433] succeeded!\n"
Mar  6 02:57:54.002: INFO: stdout: ""
Mar  6 02:57:54.002: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:57:54.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2127" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:10.309 seconds]
[sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":50,"skipped":972,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:57:54.064: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-3452
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Mar  6 02:57:54.223: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 02:58:02.105: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:58:18.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3452" for this suite.

• [SLOW TEST:23.975 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":51,"skipped":974,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:58:18.040: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6861
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-56e2b002-70f7-4243-bc70-64e3ac944e2f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:58:20.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6861" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":979,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:58:20.219: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4110
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4110/configmap-test-8967a316-569d-4ca0-9717-1f3a2bb68d48
STEP: Creating a pod to test consume configMaps
Mar  6 02:58:20.356: INFO: Waiting up to 5m0s for pod "pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a" in namespace "configmap-4110" to be "success or failure"
Mar  6 02:58:20.357: INFO: Pod "pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.791524ms
Mar  6 02:58:22.360: INFO: Pod "pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004570974s
STEP: Saw pod success
Mar  6 02:58:22.360: INFO: Pod "pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a" satisfied condition "success or failure"
Mar  6 02:58:22.362: INFO: Trying to get logs from node worker02 pod pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a container env-test: 
STEP: delete the pod
Mar  6 02:58:22.376: INFO: Waiting for pod pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a to disappear
Mar  6 02:58:22.378: INFO: Pod pod-configmaps-16b7814e-a77b-4cf1-b5ae-2bf56809e46a no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:58:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4110" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":1014,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:58:22.385: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4430
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-d617537b-617b-481f-896f-5f75338c9316
STEP: Creating a pod to test consume configMaps
Mar  6 02:58:22.536: INFO: Waiting up to 5m0s for pod "pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977" in namespace "configmap-4430" to be "success or failure"
Mar  6 02:58:22.539: INFO: Pod "pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340189ms
Mar  6 02:58:24.541: INFO: Pod "pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004793484s
STEP: Saw pod success
Mar  6 02:58:24.541: INFO: Pod "pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977" satisfied condition "success or failure"
Mar  6 02:58:24.543: INFO: Trying to get logs from node worker02 pod pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977 container configmap-volume-test: 
STEP: delete the pod
Mar  6 02:58:24.557: INFO: Waiting for pod pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977 to disappear
Mar  6 02:58:24.560: INFO: Pod pod-configmaps-075c27e1-cf03-44ec-9306-e4131c494977 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:58:24.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4430" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":1034,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:58:24.567: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3206
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar  6 02:58:24.709: INFO: Waiting up to 5m0s for pod "downward-api-e11faed6-8187-4330-a238-7a28e4d0204a" in namespace "downward-api-3206" to be "success or failure"
Mar  6 02:58:24.721: INFO: Pod "downward-api-e11faed6-8187-4330-a238-7a28e4d0204a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.862051ms
Mar  6 02:58:26.723: INFO: Pod "downward-api-e11faed6-8187-4330-a238-7a28e4d0204a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013985369s
STEP: Saw pod success
Mar  6 02:58:26.723: INFO: Pod "downward-api-e11faed6-8187-4330-a238-7a28e4d0204a" satisfied condition "success or failure"
Mar  6 02:58:26.726: INFO: Trying to get logs from node worker02 pod downward-api-e11faed6-8187-4330-a238-7a28e4d0204a container dapi-container: 
STEP: delete the pod
Mar  6 02:58:26.740: INFO: Waiting for pod downward-api-e11faed6-8187-4330-a238-7a28e4d0204a to disappear
Mar  6 02:58:26.742: INFO: Pod downward-api-e11faed6-8187-4330-a238-7a28e4d0204a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 02:58:26.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3206" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":1078,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 02:58:26.749: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1621
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 in namespace container-probe-1621
Mar  6 02:58:28.890: INFO: Started pod liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 in namespace container-probe-1621
STEP: checking the pod's current state and verifying that restartCount is present
Mar  6 02:58:28.892: INFO: Initial restart count of pod liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is 0
Mar  6 02:58:42.916: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 1 (14.023778808s elapsed)
Mar  6 02:59:00.938: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 2 (32.046591523s elapsed)
Mar  6 02:59:20.964: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 3 (52.071703627s elapsed)
Mar  6 02:59:42.992: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 4 (1m14.099871943s elapsed)
Mar  6 03:00:47.084: INFO: Restart count of pod container-probe-1621/liveness-c1ea9a92-f2c3-4e28-9a71-0c5019eec319 is now 5 (2m18.192333706s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:00:47.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1621" for this suite.

• [SLOW TEST:140.351 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1094,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:00:47.100: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-9911
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar  6 03:00:49.253: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:00:49.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9911" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1110,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:00:49.271: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename svcaccounts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-494
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Mar  6 03:00:49.920: INFO: created pod pod-service-account-defaultsa
Mar  6 03:00:49.920: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Mar  6 03:00:49.923: INFO: created pod pod-service-account-mountsa
Mar  6 03:00:49.923: INFO: pod pod-service-account-mountsa service account token volume mount: true
Mar  6 03:00:49.929: INFO: created pod pod-service-account-nomountsa
Mar  6 03:00:49.929: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Mar  6 03:00:49.934: INFO: created pod pod-service-account-defaultsa-mountspec
Mar  6 03:00:49.934: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Mar  6 03:00:49.941: INFO: created pod pod-service-account-mountsa-mountspec
Mar  6 03:00:49.941: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Mar  6 03:00:49.945: INFO: created pod pod-service-account-nomountsa-mountspec
Mar  6 03:00:49.945: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Mar  6 03:00:49.950: INFO: created pod pod-service-account-defaultsa-nomountspec
Mar  6 03:00:49.950: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Mar  6 03:00:49.957: INFO: created pod pod-service-account-mountsa-nomountspec
Mar  6 03:00:49.957: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Mar  6 03:00:49.969: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar  6 03:00:49.969: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:00:49.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-494" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":58,"skipped":1118,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:00:49.986: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-4813
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:00:50.410: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar  6 03:00:52.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:00:54.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060450, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:00:57.452: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
Mar  6 03:01:07.479: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:01:17.589: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:01:27.689: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:01:37.790: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:01:47.799: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:01:47.799: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-4813".
STEP: Found 6 events.
Mar  6 03:01:47.804: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-vkdnx: {default-scheduler } Scheduled: Successfully assigned webhook-4813/sample-webhook-deployment-5f65f8c764-vkdnx to worker02
Mar  6 03:01:47.804: INFO: At 2020-03-06 03:00:50 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:01:47.804: INFO: At 2020-03-06 03:00:50 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-vkdnx
Mar  6 03:01:47.804: INFO: At 2020-03-06 03:00:52 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-vkdnx: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:01:47.804: INFO: At 2020-03-06 03:00:52 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-vkdnx: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:01:47.804: INFO: At 2020-03-06 03:00:52 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-vkdnx: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:01:47.807: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:01:47.807: INFO: sample-webhook-deployment-5f65f8c764-vkdnx  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:00:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:00:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:00:50 +0000 UTC  }]
Mar  6 03:01:47.807: INFO: 
Mar  6 03:01:47.812: INFO: 
Logging node info for node master01
Mar  6 03:01:47.817: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 8971 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:58:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:01:47.817: INFO: 
Logging kubelet events for node master01
Mar  6 03:01:47.822: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:01:47.841: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.841: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:01:47.841: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.841: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:01:47.841: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:01:47.841: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:01:47.841: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:01:47.841: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:01:47.841: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:01:47.841: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:01:47.841: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.841: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:01:47.841: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.841: INFO: 	Container kube-apiserver ready: true, restart count 0
W0306 03:01:47.846403      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:01:47.861: INFO: 
Latency metrics for node master01
Mar  6 03:01:47.861: INFO: 
Logging node info for node master02
Mar  6 03:01:47.864: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 8958 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:01:47.864: INFO: 
Logging kubelet events for node master02
Mar  6 03:01:47.868: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:01:47.878: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.878: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:01:47.878: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.878: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:01:47.878: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.878: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:01:47.878: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.878: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:01:47.878: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:01:47.878: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:01:47.878: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:01:47.878: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.878: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:01:47.878: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:01:47.878: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:01:47.878: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:01:47.881083      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:01:47.902: INFO: 
Latency metrics for node master02
Mar  6 03:01:47.902: INFO: 
Logging node info for node master03
Mar  6 03:01:47.903: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 8959 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 02:58:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:01:47.903: INFO: 
Logging kubelet events for node master03
Mar  6 03:01:47.909: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:01:47.922: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:01:47.922: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:01:47.922: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:01:47.922: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:01:47.922: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:01:47.922: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:01:47.922: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:01:47.922: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:01:47.922: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:01:47.922: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:01:47.922: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.922: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:01:47.925426      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:01:47.944: INFO: 
Latency metrics for node master03
Mar  6 03:01:47.944: INFO: 
Logging node info for node worker01
Mar  6 03:01:47.946: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 9595 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:01:47.946: INFO: 
Logging kubelet events for node worker01
Mar  6 03:01:47.950: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:01:47.960: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:01:47.960: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:01:47.960: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:01:47.960: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:01:47.960: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:01:47.960: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:01:47.960: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:01:47.960: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:01:47.960: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:01:47.960: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:01:47.960: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:01:47.960: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:01:47.960: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:01:47.960: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:47.960: INFO: 	Container kuard ready: true, restart count 0
W0306 03:01:47.963566      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:01:47.983: INFO: 
Latency metrics for node worker01
Mar  6 03:01:47.983: INFO: 
Logging node info for node worker02
Mar  6 03:01:47.985: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 9230 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:00:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:00:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:00:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:00:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 
sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:01:47.985: INFO: 
Logging kubelet events for node worker02
Mar  6 03:01:47.989: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:01:48.001: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:01:48.001: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:01:48.001: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:01:48.001: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:01:48.001: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:01:48.001: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:01:48.001: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:01:48.001: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:01:48.001: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:01:48.001: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:01:48.001: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:01:48.001: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:01:48.001: INFO: sample-webhook-deployment-5f65f8c764-vkdnx started at 2020-03-06 03:00:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:48.001: INFO: 	Container sample-webhook ready: true, restart count 0
Mar  6 03:01:48.001: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:48.001: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:01:48.001: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:01:48.001: INFO: 	Container kube-sonobuoy ready: true, restart count 0
W0306 03:01:48.004119      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:01:48.024: INFO: 
Latency metrics for node worker02
Mar  6 03:01:48.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4813" for this suite.
STEP: Destroying namespace "webhook-4813-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [58.106 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:01:47.799: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2096
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":58,"skipped":1120,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:01:48.092: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-2293
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Mar  6 03:01:48.262: INFO: Waiting up to 5m0s for pod "var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5" in namespace "var-expansion-2293" to be "success or failure"
Mar  6 03:01:48.264: INFO: Pod "var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.934874ms
Mar  6 03:01:50.268: INFO: Pod "var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005626655s
STEP: Saw pod success
Mar  6 03:01:50.268: INFO: Pod "var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5" satisfied condition "success or failure"
Mar  6 03:01:50.269: INFO: Trying to get logs from node worker02 pod var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5 container dapi-container: 
STEP: delete the pod
Mar  6 03:01:50.288: INFO: Waiting for pod var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5 to disappear
Mar  6 03:01:50.289: INFO: Pod var-expansion-35b166b4-0f87-4f61-aec5-940213d5a5d5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:01:50.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2293" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1128,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:01:50.299: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-4672
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:01:50.441: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar  6 03:02:05.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-4672 create -f -'
Mar  6 03:02:15.970: INFO: stderr: ""
Mar  6 03:02:15.970: INFO: stdout: "e2e-test-crd-publish-openapi-563-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Mar  6 03:02:15.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-4672 delete e2e-test-crd-publish-openapi-563-crds test-cr'
Mar  6 03:02:31.088: INFO: stderr: ""
Mar  6 03:02:31.088: INFO: stdout: "e2e-test-crd-publish-openapi-563-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Mar  6 03:02:31.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-4672 apply -f -'
Mar  6 03:02:36.324: INFO: stderr: ""
Mar  6 03:02:36.324: INFO: stdout: "e2e-test-crd-publish-openapi-563-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Mar  6 03:02:36.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-4672 delete e2e-test-crd-publish-openapi-563-crds test-cr'
Mar  6 03:02:51.404: INFO: stderr: ""
Mar  6 03:02:51.404: INFO: stdout: "e2e-test-crd-publish-openapi-563-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar  6 03:02:51.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-563-crds'
Mar  6 03:03:06.566: INFO: stderr: ""
Mar  6 03:03:06.566: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-563-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:03:20.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4672" for this suite.

• [SLOW TEST:90.571 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":60,"skipped":1150,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:03:20.870: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8913
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8913.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8913.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8913.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8913.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8913.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8913.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:03:37.038: INFO: DNS probes using dns-8913/dns-test-b6c7cf0f-6f0a-445f-a237-eaef470edaf5 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:03:37.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8913" for this suite.

• [SLOW TEST:16.187 seconds]
[sig-network] DNS
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":61,"skipped":1186,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:03:37.058: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-718
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-5af059d3-5a1b-4f1e-9d82-8fa7e9d5a8d6
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5af059d3-5a1b-4f1e-9d82-8fa7e9d5a8d6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:03:41.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-718" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1203,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:03:41.270: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1660
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-75b9cdfc-af69-44a5-8bdc-94c195db1c97
STEP: Creating a pod to test consume configMaps
Mar  6 03:03:41.417: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5" in namespace "projected-1660" to be "success or failure"
Mar  6 03:03:41.419: INFO: Pod "pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029205ms
Mar  6 03:03:43.421: INFO: Pod "pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004452255s
STEP: Saw pod success
Mar  6 03:03:43.421: INFO: Pod "pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5" satisfied condition "success or failure"
Mar  6 03:03:43.423: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5 container projected-configmap-volume-test: 
STEP: delete the pod
Mar  6 03:03:43.438: INFO: Waiting for pod pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5 to disappear
Mar  6 03:03:43.440: INFO: Pod pod-projected-configmaps-0fe6ae59-8536-4cea-a2d9-5f0e297267d5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:03:43.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1660" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1277,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:03:43.446: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7586
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:03:44.350: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar  6 03:03:46.357: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060624, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060624, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060624, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719060624, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:03:49.372: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
Mar  6 03:03:59.390: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:04:09.499: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:04:19.599: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:04:29.701: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:04:39.712: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:04:39.712: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-7586".
STEP: Found 6 events.
Mar  6 03:04:39.715: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d65fz: {default-scheduler } Scheduled: Successfully assigned webhook-7586/sample-webhook-deployment-5f65f8c764-d65fz to worker02
Mar  6 03:04:39.715: INFO: At 2020-03-06 03:03:44 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:04:39.715: INFO: At 2020-03-06 03:03:44 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-d65fz
Mar  6 03:04:39.715: INFO: At 2020-03-06 03:03:45 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d65fz: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:04:39.715: INFO: At 2020-03-06 03:03:45 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d65fz: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:04:39.715: INFO: At 2020-03-06 03:03:45 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d65fz: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:04:39.717: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:04:39.717: INFO: sample-webhook-deployment-5f65f8c764-d65fz  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:03:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:03:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:03:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:03:44 +0000 UTC  }]
Mar  6 03:04:39.717: INFO: 
Mar  6 03:04:39.721: INFO: 
Logging node info for node master01
Mar  6 03:04:39.722: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 10316 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:03:58 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:04:39.723: INFO: 
Logging kubelet events for node master01
Mar  6 03:04:39.726: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:04:39.737: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:04:39.737: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:04:39.737: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:04:39.737: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.737: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:04:39.737: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.737: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:04:39.737: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.737: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:04:39.737: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.737: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:04:39.737: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:04:39.737: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:04:39.737: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:04:39.740027      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:04:39.753: INFO: 
Latency metrics for node master01
Mar  6 03:04:39.753: INFO: 
Logging node info for node master02
Mar  6 03:04:39.755: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 10302 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:04:39.755: INFO: 
Logging kubelet events for node master02
Mar  6 03:04:39.759: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:04:39.768: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.768: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:04:39.768: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:04:39.768: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:04:39.768: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:04:39.768: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.768: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:04:39.768: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.768: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:04:39.768: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.768: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:04:39.768: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.768: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:04:39.768: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:04:39.768: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:04:39.768: INFO: 	Container kube-flannel ready: true, restart count 0
W0306 03:04:39.771331      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:04:39.787: INFO: 
Latency metrics for node master02
Mar  6 03:04:39.787: INFO: 
Logging node info for node master03
Mar  6 03:04:39.789: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 10303 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:03:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:04:39.789: INFO: 
Logging kubelet events for node master03
Mar  6 03:04:39.793: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:04:39.803: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:04:39.803: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:04:39.803: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:04:39.803: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:04:39.803: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:04:39.803: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:04:39.803: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:04:39.803: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:04:39.803: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:04:39.803: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:04:39.803: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.803: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:04:39.806149      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:04:39.825: INFO: 
Latency metrics for node master03
Mar  6 03:04:39.825: INFO: 
Logging node info for node worker01
Mar  6 03:04:39.827: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 9595 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:01:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:04:39.827: INFO: 
Logging kubelet events for node worker01
Mar  6 03:04:39.831: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:04:39.843: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:04:39.843: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:04:39.843: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:04:39.843: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:04:39.843: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:04:39.843: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:04:39.843: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:04:39.843: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:04:39.843: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:04:39.843: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:04:39.843: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:04:39.843: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:04:39.843: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:04:39.843: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.843: INFO: 	Container kuard ready: true, restart count 0
W0306 03:04:39.846300      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:04:39.862: INFO: 
Latency metrics for node worker01
Mar  6 03:04:39.862: INFO: 
Logging node info for node worker02
Mar  6 03:04:39.864: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 10264 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:03:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:03:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 
k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:04:39.864: INFO: 
Logging kubelet events for node worker02
Mar  6 03:04:39.868: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:04:39.872: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:04:39.872: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:04:39.872: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:04:39.872: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:04:39.872: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:04:39.872: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:04:39.872: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:04:39.872: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:04:39.872: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:04:39.872: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:04:39.872: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:04:39.872: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:04:39.872: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.872: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:04:39.872: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.872: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:04:39.872: INFO: sample-webhook-deployment-5f65f8c764-d65fz started at 2020-03-06 03:03:44 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:04:39.872: INFO: 	Container sample-webhook ready: true, restart count 0
W0306 03:04:39.875285      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:04:39.906: INFO: 
Latency metrics for node worker02
Mar  6 03:04:39.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7586" for this suite.
STEP: Destroying namespace "webhook-7586-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [56.522 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:04:39.712: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1303
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":63,"skipped":1279,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:04:39.968: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-6301
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:04:56.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6301" for this suite.

• [SLOW TEST:16.229 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":64,"skipped":1306,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:04:56.197: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1845
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-0c5dc7b9-5a24-4918-b926-1ceda7574db4 in namespace container-probe-1845
Mar  6 03:04:58.387: INFO: Started pod test-webserver-0c5dc7b9-5a24-4918-b926-1ceda7574db4 in namespace container-probe-1845
STEP: checking the pod's current state and verifying that restartCount is present
Mar  6 03:04:58.389: INFO: Initial restart count of pod test-webserver-0c5dc7b9-5a24-4918-b926-1ceda7574db4 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:08:58.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1845" for this suite.

• [SLOW TEST:242.548 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1320,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:08:58.745: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8653
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar  6 03:08:58.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8653'
Mar  6 03:09:03.969: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar  6 03:09:03.969: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602
Mar  6 03:09:05.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete deployment e2e-test-httpd-deployment --namespace=kubectl-8653'
Mar  6 03:09:21.060: INFO: stderr: ""
Mar  6 03:09:21.060: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:09:21.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8653" for this suite.

• [SLOW TEST:22.323 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1590
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":66,"skipped":1384,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:09:21.068: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1351
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-bcd6bd70-65b1-4e89-b2c6-9900ba16fbfd
STEP: Creating a pod to test consume configMaps
Mar  6 03:09:21.211: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b" in namespace "projected-1351" to be "success or failure"
Mar  6 03:09:21.213: INFO: Pod "pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371444ms
Mar  6 03:09:23.216: INFO: Pod "pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005258599s
STEP: Saw pod success
Mar  6 03:09:23.216: INFO: Pod "pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b" satisfied condition "success or failure"
Mar  6 03:09:23.219: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b container projected-configmap-volume-test: 
STEP: delete the pod
Mar  6 03:09:23.238: INFO: Waiting for pod pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b to disappear
Mar  6 03:09:23.240: INFO: Pod pod-projected-configmaps-ca070575-167b-4031-814b-cd124e89c10b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:09:23.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1351" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1394,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:09:23.249: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9953
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar  6 03:09:23.385: INFO: Waiting up to 5m0s for pod "pod-278da763-c75e-417d-b575-cb3276a07b47" in namespace "emptydir-9953" to be "success or failure"
Mar  6 03:09:23.387: INFO: Pod "pod-278da763-c75e-417d-b575-cb3276a07b47": Phase="Pending", Reason="", readiness=false. Elapsed: 1.805488ms
Mar  6 03:09:25.390: INFO: Pod "pod-278da763-c75e-417d-b575-cb3276a07b47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00502025s
STEP: Saw pod success
Mar  6 03:09:25.390: INFO: Pod "pod-278da763-c75e-417d-b575-cb3276a07b47" satisfied condition "success or failure"
Mar  6 03:09:25.392: INFO: Trying to get logs from node worker02 pod pod-278da763-c75e-417d-b575-cb3276a07b47 container test-container: 
STEP: delete the pod
Mar  6 03:09:25.411: INFO: Waiting for pod pod-278da763-c75e-417d-b575-cb3276a07b47 to disappear
Mar  6 03:09:25.413: INFO: Pod pod-278da763-c75e-417d-b575-cb3276a07b47 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:09:25.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9953" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1406,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:09:25.424: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-5611
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5611.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5611.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:09:27.577: INFO: DNS probes using dns-test-a98920a4-b718-44c7-bc2d-8cd0482cf687 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5611.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5611.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:09:29.621: INFO: DNS probes using dns-test-6efb8304-3eac-44f0-87f6-206056379368 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5611.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5611.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5611.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:09:31.730: INFO: DNS probes using dns-test-2487b07c-a4d6-49c1-8210-f360ba516a76 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:09:31.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5611" for this suite.

• [SLOW TEST:6.348 seconds]
[sig-network] DNS
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":69,"skipped":1420,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:09:31.773: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1189
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:09:49.940: INFO: Container started at 2020-03-06 03:09:32 +0000 UTC, pod became ready at 2020-03-06 03:09:48 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:09:49.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1189" for this suite.

• [SLOW TEST:18.174 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1422,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:09:49.947: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-5136
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:09:50.098: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f6019465-8be0-4be4-996d-73b89ad09d94", Controller:(*bool)(0xc00541481a), BlockOwnerDeletion:(*bool)(0xc00541481b)}}
Mar  6 03:09:50.103: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b9e7854e-ef5e-457f-907f-a0c5ad02d48e", Controller:(*bool)(0xc00543af36), BlockOwnerDeletion:(*bool)(0xc00543af37)}}
Mar  6 03:09:50.111: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"00a9a301-1c85-445b-aaaa-b55f1b1cba3f", Controller:(*bool)(0xc0054149e6), BlockOwnerDeletion:(*bool)(0xc0054149e7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:09:55.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5136" for this suite.

• [SLOW TEST:5.181 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":71,"skipped":1454,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:09:55.128: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8972
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-8b9a1de0-c15e-4d39-b0e6-46dd8f6fffba
STEP: Creating a pod to test consume configMaps
Mar  6 03:09:55.264: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80" in namespace "projected-8972" to be "success or failure"
Mar  6 03:09:55.266: INFO: Pod "pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80": Phase="Pending", Reason="", readiness=false. Elapsed: 1.708077ms
Mar  6 03:09:57.269: INFO: Pod "pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004295324s
STEP: Saw pod success
Mar  6 03:09:57.269: INFO: Pod "pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80" satisfied condition "success or failure"
Mar  6 03:09:57.270: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80 container projected-configmap-volume-test: 
STEP: delete the pod
Mar  6 03:09:57.285: INFO: Waiting for pod pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80 to disappear
Mar  6 03:09:57.286: INFO: Pod pod-projected-configmaps-8828e7b2-5648-454a-8117-fee67b1efb80 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:09:57.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8972" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1455,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:09:57.293: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-9322
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Mar  6 03:09:57.432: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:11.268: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:10:52.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9322" for this suite.

• [SLOW TEST:55.455 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":73,"skipped":1460,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:10:52.748: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in e2e-kubelet-etc-hosts-4633
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Mar  6 03:10:56.898: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:56.898: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:57.015: INFO: Exec stderr: ""
Mar  6 03:10:57.015: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:57.015: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:57.151: INFO: Exec stderr: ""
Mar  6 03:10:57.151: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:57.151: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:57.281: INFO: Exec stderr: ""
Mar  6 03:10:57.281: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:57.281: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:57.418: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar  6 03:10:57.418: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:57.418: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:57.545: INFO: Exec stderr: ""
Mar  6 03:10:57.545: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:57.545: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:57.679: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar  6 03:10:57.679: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:57.679: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:57.802: INFO: Exec stderr: ""
Mar  6 03:10:57.802: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:57.802: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:57.952: INFO: Exec stderr: ""
Mar  6 03:10:57.952: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:57.952: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:58.082: INFO: Exec stderr: ""
Mar  6 03:10:58.082: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4633 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:10:58.082: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:10:58.214: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:10:58.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4633" for this suite.

• [SLOW TEST:5.474 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1468,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:10:58.222: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-8310
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Mar  6 03:10:58.351: INFO: namespace kubectl-8310
Mar  6 03:10:58.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-8310'
Mar  6 03:11:03.547: INFO: stderr: ""
Mar  6 03:11:03.547: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar  6 03:11:04.550: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 03:11:04.550: INFO: Found 0 / 1
Mar  6 03:11:05.550: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 03:11:05.550: INFO: Found 1 / 1
Mar  6 03:11:05.550: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Mar  6 03:11:05.552: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 03:11:05.552: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar  6 03:11:05.552: INFO: wait on agnhost-master startup in kubectl-8310 
Mar  6 03:11:05.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs agnhost-master-n6v7c agnhost-master --namespace=kubectl-8310'
Mar  6 03:11:15.639: INFO: stderr: ""
Mar  6 03:11:15.639: INFO: stdout: "Paused\n"
STEP: exposing RC
Mar  6 03:11:15.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8310'
Mar  6 03:11:35.746: INFO: stderr: ""
Mar  6 03:11:35.746: INFO: stdout: "service/rm2 exposed\n"
Mar  6 03:11:35.749: INFO: Service rm2 in namespace kubectl-8310 found.
STEP: exposing service
Mar  6 03:11:37.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8310'
Mar  6 03:11:57.860: INFO: stderr: ""
Mar  6 03:11:57.860: INFO: stdout: "service/rm3 exposed\n"
Mar  6 03:11:57.865: INFO: Service rm3 in namespace kubectl-8310 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:11:59.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8310" for this suite.

• [SLOW TEST:61.658 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":75,"skipped":1475,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:11:59.880: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-1827
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:12:00.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910" in namespace "projected-1827" to be "success or failure"
Mar  6 03:12:00.027: INFO: Pod "downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322694ms
Mar  6 03:12:02.030: INFO: Pod "downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006909286s
STEP: Saw pod success
Mar  6 03:12:02.030: INFO: Pod "downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910" satisfied condition "success or failure"
Mar  6 03:12:02.032: INFO: Trying to get logs from node worker02 pod downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910 container client-container: 
STEP: delete the pod
Mar  6 03:12:02.049: INFO: Waiting for pod downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910 to disappear
Mar  6 03:12:02.051: INFO: Pod downwardapi-volume-99ebdfa5-f72f-41f5-9f13-aa9850b36910 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:12:02.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1827" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1476,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:12:02.059: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8731
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar  6 03:12:02.197: INFO: Waiting up to 5m0s for pod "downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43" in namespace "downward-api-8731" to be "success or failure"
Mar  6 03:12:02.200: INFO: Pod "downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319602ms
Mar  6 03:12:04.203: INFO: Pod "downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005819907s
STEP: Saw pod success
Mar  6 03:12:04.203: INFO: Pod "downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43" satisfied condition "success or failure"
Mar  6 03:12:04.207: INFO: Trying to get logs from node worker02 pod downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43 container dapi-container: 
STEP: delete the pod
Mar  6 03:12:04.225: INFO: Waiting for pod downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43 to disappear
Mar  6 03:12:04.227: INFO: Pod downward-api-eef76e16-cc20-4726-a72c-de1b7a21af43 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:12:04.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8731" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1476,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:12:04.234: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-2028
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Mar  6 03:12:04.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 cluster-info'
Mar  6 03:12:19.456: INFO: stderr: ""
Mar  6 03:12:19.456: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\x1b[0;32mMetrics-server\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:12:19.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2028" for this suite.

• [SLOW TEST:15.230 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl cluster-info
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1128
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":78,"skipped":1492,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:12:19.465: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-1237
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:12:19.597: INFO: Creating deployment "test-recreate-deployment"
Mar  6 03:12:19.599: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Mar  6 03:12:19.609: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Mar  6 03:12:21.617: INFO: Waiting for deployment "test-recreate-deployment" to complete
Mar  6 03:12:21.620: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Mar  6 03:12:21.625: INFO: Updating deployment test-recreate-deployment
Mar  6 03:12:21.625: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar  6 03:12:21.682: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-1237 /apis/apps/v1/namespaces/deployment-1237/deployments/test-recreate-deployment 41d8c8b2-781d-489f-84f8-d3e22eb94ae3 12545 2 2020-03-06 03:12:19 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031ba108  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-06 03:12:21 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-06 03:12:21 +0000 UTC,LastTransitionTime:2020-03-06 03:12:19 +0000 
UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Mar  6 03:12:21.684: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-1237 /apis/apps/v1/namespaces/deployment-1237/replicasets/test-recreate-deployment-5f94c574ff dd501a14-4caf-4276-b773-8495c7dd986e 12544 1 2020-03-06 03:12:21 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 41d8c8b2-781d-489f-84f8-d3e22eb94ae3 0xc002f65bc7 0xc002f65bc8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f65c28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:12:21.684: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Mar  6 03:12:21.684: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-1237 /apis/apps/v1/namespaces/deployment-1237/replicasets/test-recreate-deployment-799c574856 63f28a5a-1b51-475c-bb3f-f06bb0f5239f 12534 2 2020-03-06 03:12:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 41d8c8b2-781d-489f-84f8-d3e22eb94ae3 0xc002f65c97 0xc002f65c98}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f65d08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:12:21.690: INFO: Pod "test-recreate-deployment-5f94c574ff-thfw7" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-thfw7 test-recreate-deployment-5f94c574ff- deployment-1237 /api/v1/namespaces/deployment-1237/pods/test-recreate-deployment-5f94c574ff-thfw7 4863d25f-2248-43d5-8978-9e0b2b98f006 12546 0 2020-03-06 03:12:21 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff dd501a14-4caf-4276-b773-8495c7dd986e 0xc0031ba587 0xc0031ba588}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dc9j4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dc9j4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dc9j4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeri
odSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:12:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:,StartTime:2020-03-06 03:12:21 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:12:21.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1237" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":79,"skipped":1500,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:12:21.698: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-1905
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:12:22.798: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:12:25.821: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:12:25.824: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Mar  6 03:12:30.879: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:12:40.989: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:12:51.088: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:13:01.191: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:13:11.203: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:13:11.203: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-1905".
STEP: Found 6 events.
Mar  6 03:13:11.715: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2skpg: {default-scheduler } Scheduled: Successfully assigned webhook-1905/sample-webhook-deployment-5f65f8c764-2skpg to worker02
Mar  6 03:13:11.715: INFO: At 2020-03-06 03:12:22 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:13:11.715: INFO: At 2020-03-06 03:12:22 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-2skpg
Mar  6 03:13:11.715: INFO: At 2020-03-06 03:12:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2skpg: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:13:11.715: INFO: At 2020-03-06 03:12:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2skpg: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:13:11.715: INFO: At 2020-03-06 03:12:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2skpg: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:13:11.720: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:13:11.720: INFO: sample-webhook-deployment-5f65f8c764-2skpg  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:12:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:12:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:12:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:12:22 +0000 UTC  }]
Mar  6 03:13:11.720: INFO: 
Mar  6 03:13:11.723: INFO: 
Logging node info for node master01
Mar  6 03:13:11.725: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 11359 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:08:59 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:13:11.725: INFO: 
Logging kubelet events for node master01
Mar  6 03:13:11.729: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:13:11.739: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:13:11.739: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:13:11.739: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:13:11.739: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:13:11.739: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:13:11.739: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:13:11.739: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.739: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:13:11.739: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.739: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:13:11.739: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.739: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:13:11.739: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.739: INFO: 	Container kube-scheduler ready: true, restart count 1
W0306 03:13:11.742353      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:13:11.760: INFO: 
Latency metrics for node master01
Mar  6 03:13:11.760: INFO: 
Logging node info for node master02
Mar  6 03:13:11.762: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 11338 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:13:11.762: INFO: 
Logging kubelet events for node master02
Mar  6 03:13:11.766: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:13:11.776: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.776: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:13:11.776: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.776: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:13:11.776: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.776: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:13:11.776: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.776: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:13:11.776: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:13:11.776: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:13:11.776: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:13:11.776: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.776: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:13:11.776: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:13:11.776: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:13:11.776: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:13:11.779136      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:13:11.811: INFO: 
Latency metrics for node master02
Mar  6 03:13:11.811: INFO: 
Logging node info for node master03
Mar  6 03:13:11.815: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 11340 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:08:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:13:11.815: INFO: 
Logging kubelet events for node master03
Mar  6 03:13:11.820: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:13:11.833: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:13:11.833: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:13:11.833: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:13:11.833: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:13:11.833: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:13:11.833: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:13:11.833: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:13:11.833: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:13:11.833: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:13:11.833: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:13:11.833: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.833: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:13:11.835874      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:13:11.856: INFO: 
Latency metrics for node master03
Mar  6 03:13:11.856: INFO: 
Logging node info for node worker01
Mar  6 03:13:11.858: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 12224 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:11:21 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:11:21 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:11:21 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:11:21 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:13:11.858: INFO: 
Logging kubelet events for node worker01
Mar  6 03:13:11.864: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:13:11.873: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:13:11.873: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:13:11.873: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:13:11.873: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:13:11.873: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:13:11.873: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:13:11.873: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:13:11.873: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:13:11.873: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:13:11.873: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:13:11.873: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:13:11.873: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:13:11.873: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:13:11.873: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.873: INFO: 	Container metrics-server ready: true, restart count 0
W0306 03:13:11.876614      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:13:11.893: INFO: 
Latency metrics for node worker01
Mar  6 03:13:11.893: INFO: 
Logging node info for node worker02
Mar  6 03:13:11.895: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 11322 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:08:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:08:50 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 
k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:13:11.896: INFO: 
Logging kubelet events for node worker02
Mar  6 03:13:11.900: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:13:11.905: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.905: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:13:11.905: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.905: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:13:11.905: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:13:11.905: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:13:11.905: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:13:11.905: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:13:11.905: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:13:11.905: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:13:11.905: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:13:11.905: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:13:11.905: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:13:11.905: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:13:11.905: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:13:11.905: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:13:11.905: INFO: sample-webhook-deployment-5f65f8c764-2skpg started at 2020-03-06 03:12:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:13:11.905: INFO: 	Container sample-webhook ready: true, restart count 0
W0306 03:13:11.907709      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:13:11.943: INFO: 
Latency metrics for node worker02
Mar  6 03:13:11.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1905" for this suite.
STEP: Destroying namespace "webhook-1905-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [50.322 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:13:11.203: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1788
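The "timed out waiting for the condition" error above is the generic message produced when the framework's poll-until-ready loop exhausts its deadline before the webhook configuration becomes ready. A minimal sketch of that polling pattern, written here in Python rather than the framework's Go (the function name `wait_for` and its parameters are illustrative, not part of the e2e framework):

```python
import time

def wait_for(condition, timeout=30.0, interval=1.0):
    """Poll condition() until it returns True, or raise after `timeout` seconds.

    Mirrors the poll/deadline pattern behind the e2e failure message:
    the caller only ever sees the generic timeout text, not which
    underlying check kept returning False.
    """
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError("timed out waiting for the condition")
        time.sleep(interval)

# Usage: a condition that becomes true on the third poll.
state = {"polls": 0}

def webhook_ready():
    state["polls"] += 1
    return state["polls"] >= 3

wait_for(webhook_ready, timeout=5.0, interval=0.01)
print("ready after", state["polls"], "polls")
```

Because only the timeout message surfaces, diagnosing this kind of failure usually means inspecting the resources the loop was watching (here, the admission webhook's configuration and endpoints) rather than the error itself.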
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":79,"skipped":1512,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
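Lines like the one above are single-line JSON progress records interleaved with the human-readable output, carrying running totals and the accumulated failure list. A small Python sketch for pulling the latest totals out of such a log (the helper name `latest_progress` and the in-memory sample lines are illustrative; a real run would read the log file):

```python
import json

def latest_progress(lines):
    """Return the most recent JSON progress record in an e2e log, or None."""
    last = None
    for raw in lines:
        # PASSED records are prefixed with a Ginkgo bullet character.
        raw = raw.strip().lstrip("\u2022")
        if raw.startswith('{"msg"'):
            try:
                last = json.loads(raw)
            except json.JSONDecodeError:
                continue  # interleaved output occasionally corrupts a record
    return last

rec = latest_progress([
    "SSSSSSS",
    '\u2022{"msg":"PASSED example","total":278,"completed":81,'
    '"skipped":1575,"failed":6,"failures":["example failure"]}',
])
print(f'{rec["completed"]}/{rec["total"]} completed, {rec["failed"]} failed')
```

The `failures` array is cumulative, which is why every record after the first failure repeats the full list of failed specs.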
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:13:12.020: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-8313
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar  6 03:13:12.162: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:13:15.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8313" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":80,"skipped":1565,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:13:15.466: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-7320
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-1aca2271-9fcd-4909-8fc1-4ac3149c3d85
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:13:15.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7320" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":81,"skipped":1575,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSS
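The Secrets test above asserts that the API server rejects a Secret whose `data` map contains an empty key. A minimal sketch of the kind of validation that test exercises — the regex is an approximation of Kubernetes' data-key rules (non-empty, alphanumerics plus `-`, `_`, `.`), not the exact apimachinery validator:

```python
import re

# Approximation of Kubernetes' key validation for Secret/ConfigMap data keys:
# keys must be non-empty and consist of alphanumerics, '-', '_' or '.'.
KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def validate_secret_keys(data: dict) -> list:
    """Return a list of error strings for invalid keys (empty list = valid)."""
    errors = []
    for key in data:
        if not KEY_RE.match(key):
            errors.append(f"invalid key {key!r}: must match {KEY_RE.pattern}")
    return errors

# The e2e test creates a secret with a single empty key, which must fail.
bad_secret_data = {"": b"value-1"}
good_secret_data = {"data-1": b"value-1"}

print(validate_secret_keys(bad_secret_data))   # non-empty list -> rejected
print(validate_secret_keys(good_secret_data))  # [] -> accepted
```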
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:13:15.605: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-4664
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar  6 03:13:15.743: INFO: PodSpec: initContainers in spec.initContainers
Mar  6 03:13:59.829: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1b53fd52-9d57-466e-a3cb-a06caf734d5c", GenerateName:"", Namespace:"init-container-4664", SelfLink:"/api/v1/namespaces/init-container-4664/pods/pod-init-1b53fd52-9d57-466e-a3cb-a06caf734d5c", UID:"07e11a80-c959-40fc-bf44-2864ae2d5570", ResourceVersion:"13065", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"743765664"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dl88b", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002c79880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dl88b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dl88b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dl88b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c00438), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"worker02", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028afc80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c004c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c004e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002c004e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c004ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061195, loc:(*time.Location)(0x7db7bc0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.1.251", 
PodIP:"10.244.3.101", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.101"}}, StartTime:(*v1.Time)(0xc0030c2120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027edd50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027eddc0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://72ba66edb19bab04d4a5117ad0f1ff9c8f541503a6d74743683bd8e2ee54731a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030c2160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030c2140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002c0056f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:13:59.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4664" for this suite.

• [SLOW TEST:44.232 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":82,"skipped":1582,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
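The InitContainer test above creates a pod whose first init container runs `/bin/false`; with `restartPolicy: Always` the kubelet keeps retrying the failing init container (note `RestartCount:3` for `init1` in the dump) and never starts the app container `run1`. A simplified model of that ordering rule — the helper name and shape are hypothetical, not the real kubelet API:

```python
def next_container_to_run(init_results, restart_policy="Always"):
    """Decide what the kubelet runs next, given init container exit codes so far.

    init_results: list of exit codes for init containers, in spec order
                  (None = not yet run). A simplified sketch of kubelet logic.
    """
    for i, code in enumerate(init_results):
        if code is None:
            return ("init", i)        # run the next unstarted init container
        if code != 0:
            if restart_policy == "Never":
                return ("failed", i)  # pod fails permanently
            return ("init", i)        # Always/OnFailure: retry the failed one
    return ("app", 0)                 # all init containers succeeded -> app starts

# init1 keeps failing, so app containers never start:
print(next_container_to_run([1, None]))           # ('init', 0) -> retry init1
print(next_container_to_run([0, 0]))              # ('app', 0)  -> start run1
print(next_container_to_run([1, None], "Never"))  # ('failed', 0)
```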
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:13:59.837: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6668
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Mar  6 03:13:59.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=kubectl-6668 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Mar  6 03:14:17.115: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Mar  6 03:14:17.115: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:14:19.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6668" for this suite.

• [SLOW TEST:19.292 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1944
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":83,"skipped":1606,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSS
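The kubectl test above shells out to `kubectl run` with `--rm`, `--attach`, and `--stdin`, so the job is deleted once the attached command exits. A sketch of assembling that invocation, with flags and paths taken from the logged command line (the deprecated `--generator=job/v1` is what the v1.17 e2e suite used, hence the stderr warning):

```python
def kubectl_run_rm_job(kubeconfig, namespace, name, image, command):
    """Build the argv the e2e test runs (mirrors the logged command line)."""
    return [
        "kubectl",
        f"--kubeconfig={kubeconfig}",
        f"--namespace={namespace}",
        "run", name,
        f"--image={image}",
        "--rm=true",
        "--generator=job/v1",   # deprecated in 1.17; kubectl warns on stderr
        "--restart=OnFailure",
        "--attach=true",
        "--stdin",
        "--",
    ] + command

argv = kubectl_run_rm_job(
    "/tmp/kubeconfig-780690759", "kubectl-6668",
    "e2e-test-rm-busybox-job", "docker.io/library/busybox:1.29",
    ["sh", "-c", "cat && echo 'stdin closed'"],
)
print(" ".join(argv))
```

Because `--rm=true` is set, kubectl deletes the Job after the attach session ends, which is exactly what the "verifying the job ... was deleted" step checks.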
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:14:19.129: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename prestop
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in prestop-5832
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-5832
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5832
STEP: Deleting pre-stop pod
Mar  6 03:14:28.289: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:14:28.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5832" for this suite.

• [SLOW TEST:9.173 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":84,"skipped":1624,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
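The PreStop test above deletes a pod that carries a `lifecycle.preStop` handler and then checks that the server pod recorded the hook firing (`"Received": {"prestop": 1}` in the log). A minimal manifest sketch of such a tester pod — the hook endpoint and command are hypothetical, not the exact e2e fixture:

```python
# Minimal pod manifest with a preStop hook that notifies a peer before the
# container is killed. Endpoint and command are illustrative assumptions.
tester_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "tester", "namespace": "prestop-5832"},
    "spec": {
        "containers": [{
            "name": "tester",
            "image": "docker.io/library/busybox:1.29",
            "lifecycle": {
                "preStop": {
                    "exec": {
                        "command": ["wget", "-qO-", "http://server:8080/prestop"]
                    }
                }
            },
        }],
    },
}

hook = tester_pod["spec"]["containers"][0]["lifecycle"]["preStop"]
print(hook["exec"]["command"])
```

When the pod is deleted, the kubelet runs the preStop exec before sending SIGTERM, so the server sees exactly one `/prestop` hit — the `"prestop": 1` counter the test asserts on.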
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:14:28.302: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-4269
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar  6 03:14:28.438: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  6 03:14:28.446: INFO: Waiting for terminating namespaces to be deleted...
Mar  6 03:14:28.451: INFO: 
Logging pods the kubelet thinks is on node worker01 before test
Mar  6 03:14:28.457: INFO: contour-54748c65f5-jl5wz from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:14:28.457: INFO: metrics-server-78799bf646-xrsnn from kube-system started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:14:28.457: INFO: kube-proxy-kcb8f from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:14:28.457: INFO: contour-certgen-82k46 from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:14:28.457: INFO: contour-54748c65f5-gk5sz from projectcontour started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:14:28.457: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:14:28.457: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:14:28.457: INFO: kube-flannel-ds-amd64-xxhz9 from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:14:28.457: INFO: kuard-678c676f5d-vsn86 from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:14:28.457: INFO: envoy-lvmcb from projectcontour started at 2020-03-06 02:30:45 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:14:28.457: INFO: kuard-678c676f5d-m29b6 from default started at 2020-03-06 02:30:49 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:14:28.457: INFO: kuard-678c676f5d-tzsnn from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.457: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:14:28.457: INFO: 
Logging pods the kubelet thinks is on node worker02 before test
Mar  6 03:14:28.464: INFO: kube-proxy-5xxdb from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:14:28.464: INFO: sonobuoy from sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:14:28.464: INFO: tester from prestop-5832 started at 2020-03-06 03:14:21 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container tester ready: true, restart count 0
Mar  6 03:14:28.464: INFO: kube-flannel-ds-amd64-ztfzf from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:14:28.464: INFO: envoy-wgz76 from projectcontour started at 2020-03-06 02:30:55 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:14:28.464: INFO: sonobuoy-e2e-job-67137ff64ac145d3 from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:14:28.464: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:14:28.464: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:14:28.464: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:14:28.464: INFO: e2e-test-rm-busybox-job-gz74m from kubectl-6668 started at 2020-03-06 03:14:05 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container e2e-test-rm-busybox-job ready: false, restart count 0
Mar  6 03:14:28.464: INFO: server from prestop-5832 started at 2020-03-06 03:14:19 +0000 UTC (1 container statuses recorded)
Mar  6 03:14:28.464: INFO: 	Container server ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node worker01
STEP: verifying the node has the label node worker02
Mar  6 03:14:28.490: INFO: Pod kuard-678c676f5d-m29b6 requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod kuard-678c676f5d-tzsnn requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod kuard-678c676f5d-vsn86 requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod kube-flannel-ds-amd64-xxhz9 requesting resource cpu=100m on Node worker01
Mar  6 03:14:28.490: INFO: Pod kube-flannel-ds-amd64-ztfzf requesting resource cpu=100m on Node worker02
Mar  6 03:14:28.490: INFO: Pod kube-proxy-5xxdb requesting resource cpu=0m on Node worker02
Mar  6 03:14:28.490: INFO: Pod kube-proxy-kcb8f requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod metrics-server-78799bf646-xrsnn requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod server requesting resource cpu=0m on Node worker02
Mar  6 03:14:28.490: INFO: Pod tester requesting resource cpu=0m on Node worker02
Mar  6 03:14:28.490: INFO: Pod contour-54748c65f5-gk5sz requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod contour-54748c65f5-jl5wz requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod contour-certgen-82k46 requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod envoy-lvmcb requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod envoy-wgz76 requesting resource cpu=0m on Node worker02
Mar  6 03:14:28.490: INFO: Pod sonobuoy requesting resource cpu=0m on Node worker02
Mar  6 03:14:28.490: INFO: Pod sonobuoy-e2e-job-67137ff64ac145d3 requesting resource cpu=0m on Node worker02
Mar  6 03:14:28.490: INFO: Pod sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g requesting resource cpu=0m on Node worker01
Mar  6 03:14:28.490: INFO: Pod sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd requesting resource cpu=0m on Node worker02
STEP: Starting Pods to consume most of the cluster CPU.
Mar  6 03:14:28.490: INFO: Creating a pod which consumes cpu=1330m on Node worker01
Mar  6 03:14:28.496: INFO: Creating a pod which consumes cpu=1330m on Node worker02
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc.15f9988b67c92bc1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4269/filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc to worker02]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc.15f9988b8c545cb1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc.15f9988b90220cce], Reason = [Created], Message = [Created container filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc.15f9988b970563e8], Reason = [Started], Message = [Started container filler-pod-cdcc7cba-6b00-4359-9282-43a0866185fc]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988b677d306d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4269/filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c to worker01]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988b8c9659cb], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.1"]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988bbdd81df3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.1"]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988bc11bafaf], Reason = [Created], Message = [Created container filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c.15f9988bc8cda64b], Reason = [Started], Message = [Started container filler-pod-f9acd092-0533-4e68-a356-3b010f7f400c]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f9988c5725097b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taints that the pod didn't tolerate.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f9988c578ea714], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taints that the pod didn't tolerate.]
STEP: removing the label node off the node worker01
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node worker02
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:14:33.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4269" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:5.254 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":85,"skipped":1652,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
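The FailedScheduling events above come from creating a pod whose CPU request exceeds what the 1330m filler pods leave free on each worker. A minimal manifest that would trigger the same event (the pod name and the 1500m figure are illustrative, not taken from this run) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: over-request-pod        # illustrative name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1500m"            # assumed: more than the CPU remaining after the filler pods
```

The scheduler should then record a `FailedScheduling` event with a message like `0/5 nodes are available: ... Insufficient cpu`, visible via `kubectl describe pod` or `kubectl get events`.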
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:14:33.556: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2718
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-7002cf1e-8aa0-4b99-87a9-87953e087c52
STEP: Creating secret with name s-test-opt-upd-b806b348-982a-41e6-9838-46ef850c436f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7002cf1e-8aa0-4b99-87a9-87953e087c52
STEP: Updating secret s-test-opt-upd-b806b348-982a-41e6-9838-46ef850c436f
STEP: Creating secret with name s-test-opt-create-858cb526-dd63-4d02-bc09-019e87bc45a4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:14:37.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2718" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1652,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
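The test above deletes one secret and creates another while the pod is running; marking the secret volume `optional: true` is what lets the pod start, and keep running, while a referenced secret is absent. A sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-watcher          # illustrative
spec:
  containers:
  - name: watcher
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/opt-secret
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-del   # illustrative; may not exist yet, or may be deleted later
      optional: true               # pod starts and runs even if the secret is missing
```

When the secret appears (or changes), the kubelet projects the updated keys into the mounted volume, which is the "waiting to observe update in volume" step in the log.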
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:14:37.766: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-8777
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-8777/secret-test-17cf8109-0030-4616-bb39-a1cbaa6d8145
STEP: Creating a pod to test consume secrets
Mar  6 03:14:37.905: INFO: Waiting up to 5m0s for pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a" in namespace "secrets-8777" to be "success or failure"
Mar  6 03:14:37.910: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.431302ms
Mar  6 03:14:39.914: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00939711s
Mar  6 03:14:41.917: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012186493s
Mar  6 03:14:43.919: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014367103s
Mar  6 03:14:45.921: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.01666046s
STEP: Saw pod success
Mar  6 03:14:45.921: INFO: Pod "pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a" satisfied condition "success or failure"
Mar  6 03:14:45.924: INFO: Trying to get logs from node worker01 pod pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a container env-test: 
STEP: delete the pod
Mar  6 03:14:45.936: INFO: Waiting for pod pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a to disappear
Mar  6 03:14:45.938: INFO: Pod pod-configmaps-e621dfc1-9b5a-4591-9ac3-1942565f221a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:14:45.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8777" for this suite.

• [SLOW TEST:8.178 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1698,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
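Consuming a secret "via the environment", as this test does, means mapping a secret key into a container env var with `secretKeyRef`. A hedged sketch (names, image, and key are illustrative; the e2e test uses its own test image and generated names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-test                 # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox               # assumed image
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test      # illustrative secret name
          key: data-1            # illustrative key within the secret
```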
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:14:45.945: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1502
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar  6 03:14:48.112: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:14:48.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1502" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1705,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
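The `Expected: &{OK} to match Container's Termination Message: OK` line reflects a container that wrote its termination message to a file before exiting. A sketch of the relevant spec fields (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox                  # assumed image
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```

With `FallbackToLogsOnError`, the message comes from the file when the container writes one; the fallback to container logs only applies when the file is empty and the container failed. The recorded message can be read back with `kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'`.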
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:14:48.131: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-336
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:14:48.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159" in namespace "projected-336" to be "success or failure"
Mar  6 03:14:48.268: INFO: Pod "downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223805ms
Mar  6 03:14:50.270: INFO: Pod "downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004742588s
STEP: Saw pod success
Mar  6 03:14:50.270: INFO: Pod "downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159" satisfied condition "success or failure"
Mar  6 03:14:50.273: INFO: Trying to get logs from node worker01 pod downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159 container client-container: 
STEP: delete the pod
Mar  6 03:14:50.290: INFO: Waiting for pod downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159 to disappear
Mar  6 03:14:50.292: INFO: Pod downwardapi-volume-ad81a96d-8879-4230-b7a4-61c096d14159 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:14:50.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-336" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1711,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
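"DefaultMode on files" refers to the `defaultMode` field of a projected volume, which sets the permission bits on every projected file. A sketch of the downward API variant this test exercises (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaultmode     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # projected files appear as r--------
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```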
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:14:50.304: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-2567
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2567
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar  6 03:14:50.439: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar  6 03:15:12.493: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.108:8080/dial?request=hostname&protocol=udp&host=10.244.4.23&port=8081&tries=1'] Namespace:pod-network-test-2567 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:15:12.493: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:15:12.603: INFO: Waiting for responses: map[]
Mar  6 03:15:12.605: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.108:8080/dial?request=hostname&protocol=udp&host=10.244.3.107&port=8081&tries=1'] Namespace:pod-network-test-2567 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:15:12.605: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:15:12.749: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:15:12.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2567" for this suite.

• [SLOW TEST:22.453 seconds]
[sig-network] Networking
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1716,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
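The intra-pod UDP check works by exec'ing into a test pod and curling the agnhost webserver's `/dial` endpoint, which relays a UDP probe to the target pod and returns the hostname it answered with. The probe URL from this run can be reconstructed as below; the final `kubectl exec` line is left commented because it needs the live cluster from this run:

```shell
#!/bin/sh
# Rebuild the /dial probe URL used by the test (IPs taken from this run's log)
HOST_POD_IP=10.244.3.108      # pod running the agnhost webserver on :8080
TARGET_POD_IP=10.244.4.23     # pod expected to answer the UDP probe on :8081
URL="http://${HOST_POD_IP}:8080/dial?request=hostname&protocol=udp&host=${TARGET_POD_IP}&port=8081&tries=1"
echo "$URL"
# kubectl exec -n pod-network-test-2567 host-test-container-pod -c agnhost -- \
#   curl -g -q -s "$URL"
```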
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:15:12.757: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename replication-controller
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replication-controller-8478
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:15:15.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8478" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":91,"skipped":1717,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
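Adoption here means the ReplicationController's controller sets an `ownerReference` on a pre-existing bare pod whose labels match the RC's selector, instead of creating a new replica. A sketch of such an RC (names and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption             # illustrative
spec:
  replicas: 1
  selector:
    name: pod-adoption           # matches the existing pod's 'name' label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: k8s.gcr.io/pause:3.1   # assumed image
```

After creation, `kubectl get pod -o jsonpath='{.metadata.ownerReferences}'` on the orphan pod should show the RC as its owner.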
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:15:15.920: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2439
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar  6 03:15:18.593: INFO: Successfully updated pod "labelsupdate21c72232-a4f4-45c3-8090-647410d93d42"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:15:20.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2439" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1719,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:15:20.617: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-827
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:15:37.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-827" for this suite.

• [SLOW TEST:17.182 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":93,"skipped":1723,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
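"Capturing the life of a secret" uses an object-count quota: the quota's `used` figure rises when a Secret is created in the namespace and falls when it is deleted. A minimal quota of this kind (the name and limit are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota               # illustrative
spec:
  hard:
    secrets: "10"                # cap on the number of Secret objects in the namespace
```

`kubectl describe resourcequota test-quota` then shows the `used` count tracking secret creation and deletion, which is what the "Ensuring resource quota status..." steps above verify.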
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:15:37.799: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7751
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:15:37.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477" in namespace "projected-7751" to be "success or failure"
Mar  6 03:15:37.960: INFO: Pod "downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064581ms
Mar  6 03:15:39.962: INFO: Pod "downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004226029s
STEP: Saw pod success
Mar  6 03:15:39.962: INFO: Pod "downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477" satisfied condition "success or failure"
Mar  6 03:15:39.964: INFO: Trying to get logs from node worker02 pod downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477 container client-container: 
STEP: delete the pod
Mar  6 03:15:39.977: INFO: Waiting for pod downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477 to disappear
Mar  6 03:15:39.979: INFO: Pod downwardapi-volume-dd162cd5-aa8a-43ca-ae84-d0d57c7f7477 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:15:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7751" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1725,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
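The manifest the framework generates for this test is not shown in the log. A hypothetical sketch of the kind of pod it creates — a projected downwardAPI volume exposing the container's memory request as a file (names and image are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed e2e test image
    command: ["/mounttest", "--file_content=/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"              # the value the test expects to read back
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: "1Mi"        # request is written as a count of Mi
```

The pod runs to completion ("Succeeded"), and the test asserts on the container's log output, which is why the framework waits for the "success or failure" condition rather than readiness.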
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:15:39.986: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-6822
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:15:40.124: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Mar  6 03:15:51.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 create -f -'
Mar  6 03:16:01.621: INFO: stderr: ""
Mar  6 03:16:01.621: INFO: stdout: "e2e-test-crd-publish-openapi-5310-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar  6 03:16:01.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 delete e2e-test-crd-publish-openapi-5310-crds test-foo'
Mar  6 03:16:16.717: INFO: stderr: ""
Mar  6 03:16:16.717: INFO: stdout: "e2e-test-crd-publish-openapi-5310-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Mar  6 03:16:16.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 apply -f -'
Mar  6 03:16:21.957: INFO: stderr: ""
Mar  6 03:16:21.957: INFO: stdout: "e2e-test-crd-publish-openapi-5310-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar  6 03:16:21.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 delete e2e-test-crd-publish-openapi-5310-crds test-foo'
Mar  6 03:16:37.039: INFO: stderr: ""
Mar  6 03:16:37.039: INFO: stdout: "e2e-test-crd-publish-openapi-5310-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Mar  6 03:16:37.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 create -f -'
Mar  6 03:16:37.225: INFO: rc: 1
Mar  6 03:16:37.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 apply -f -'
Mar  6 03:16:37.405: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Mar  6 03:16:37.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 create -f -'
Mar  6 03:16:37.593: INFO: rc: 1
Mar  6 03:16:37.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 --namespace=crd-publish-openapi-6822 apply -f -'
Mar  6 03:16:37.784: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Mar  6 03:16:37.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds'
Mar  6 03:16:52.985: INFO: stderr: ""
Mar  6 03:16:52.985: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5310-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Mar  6 03:16:52.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds.metadata'
Mar  6 03:17:08.201: INFO: stderr: ""
Mar  6 03:17:08.201: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5310-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. 
This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t&lt;[]string&gt;\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it.
If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Mar  6 03:17:08.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds.spec'
Mar  6 03:17:23.358: INFO: stderr: ""
Mar  6 03:17:23.358: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5310-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar  6 03:17:23.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds.spec.bars'
Mar  6 03:17:38.515: INFO: stderr: ""
Mar  6 03:17:38.515: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5310-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar  6 03:17:38.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 explain e2e-test-crd-publish-openapi-5310-crds.spec.bars2'
Mar  6 03:17:53.726: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:18:21.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6822" for this suite.

• [SLOW TEST:161.553 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":95,"skipped":1749,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
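The test registers a CRD whose OpenAPI v3 schema drives both the client-side validation (`rc: 1` on unknown/missing properties) and the `kubectl explain` output above. A hypothetical reconstruction of that CRD, assembled from the fields the `explain` output reveals (`spec.bars` with required `name`, plus `age` and `bazs`); the group name matches the log, but the plural/kind names and field types are assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test-foo.example.com   # hypothetical; the test generates a random name
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]    # why `create` without name exits with rc 1
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
          status:
            description: Status of Foo
            type: object
```

Because the schema is structural and published to `/openapi/v2`, kubectl can reject unknown properties client-side and answer `kubectl explain` for each path, which is exactly what the STEP lines exercise.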
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:18:21.540: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-9621
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar  6 03:18:21.675: INFO: Waiting up to 5m0s for pod "pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8" in namespace "emptydir-9621" to be "success or failure"
Mar  6 03:18:21.677: INFO: Pod "pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303795ms
Mar  6 03:18:23.680: INFO: Pod "pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004765868s
STEP: Saw pod success
Mar  6 03:18:23.680: INFO: Pod "pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8" satisfied condition "success or failure"
Mar  6 03:18:23.682: INFO: Trying to get logs from node worker02 pod pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8 container test-container: 
STEP: delete the pod
Mar  6 03:18:23.702: INFO: Waiting for pod pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8 to disappear
Mar  6 03:18:23.713: INFO: Pod pod-612a6d6f-43f0-4f98-a1e1-716a8aff25d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:18:23.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9621" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1754,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
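"(root,0644,tmpfs)" means: write a file as root with mode 0644 into a memory-backed emptyDir and verify its content and permissions. A hypothetical sketch of such a pod (names, image, and flags are assumptions; the log does not show the manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed e2e test image
    command:
    - /mounttest
    - --fs_type=/test-volume              # expected to report tmpfs
    - --new_file_0644=/test-volume/file   # create the file with mode 0644
    - --file_perm=/test-volume/file       # read the mode back for verification
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # "Memory" backs the emptyDir with tmpfs instead of node disk
```

The `[LinuxOnly]` tag exists because tmpfs and POSIX file modes are Linux semantics; the test is skipped on Windows nodes.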
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:18:23.725: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7937
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:18:23.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e" in namespace "projected-7937" to be "success or failure"
Mar  6 03:18:23.860: INFO: Pod "downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339379ms
Mar  6 03:18:25.862: INFO: Pod "downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004633342s
STEP: Saw pod success
Mar  6 03:18:25.862: INFO: Pod "downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e" satisfied condition "success or failure"
Mar  6 03:18:25.866: INFO: Trying to get logs from node worker02 pod downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e container client-container: 
STEP: delete the pod
Mar  6 03:18:25.884: INFO: Waiting for pod downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e to disappear
Mar  6 03:18:25.887: INFO: Pod downwardapi-volume-89dafbfc-5654-44d6-b062-ca4107ad419e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:18:25.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7937" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1765,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:18:25.910: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-8452
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-dfc42c2f-3cdc-41aa-8164-ac4005b2ec1b
STEP: Creating a pod to test consume configMaps
Mar  6 03:18:26.054: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75" in namespace "configmap-8452" to be "success or failure"
Mar  6 03:18:26.056: INFO: Pod "pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75": Phase="Pending", Reason="", readiness=false. Elapsed: 1.835541ms
Mar  6 03:18:28.058: INFO: Pod "pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004206937s
STEP: Saw pod success
Mar  6 03:18:28.058: INFO: Pod "pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75" satisfied condition "success or failure"
Mar  6 03:18:28.060: INFO: Trying to get logs from node worker02 pod pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75 container configmap-volume-test: 
STEP: delete the pod
Mar  6 03:18:28.074: INFO: Waiting for pod pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75 to disappear
Mar  6 03:18:28.075: INFO: Pod pod-configmaps-a9e28c84-3dfe-4797-9435-e40b4a2ede75 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:18:28.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8452" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1765,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
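"With mappings" refers to the `items` list on a configMap volume, which remaps a data key to an arbitrary file path instead of the default key-named file. A hypothetical sketch of the objects this test creates (names, image, and the key/path values are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map   # the log shows a randomized variant of this name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed e2e test image
    command: ["/mounttest", "--file_content=/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1   # the mapping: key surfaces at this path, not at /data-1
```

Without `items`, every key in the ConfigMap would appear as a top-level file in the mount; the mapping both selects which keys are projected and relocates them.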
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:18:28.082: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-7745
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Mar  6 03:18:28.215: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Mar  6 03:19:03.392: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:19:36.220: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:20:19.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7745" for this suite.

• [SLOW TEST:111.638 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":99,"skipped":1769,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:20:19.721: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-180
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secret-namespace-1050
STEP: Creating secret with name secret-test-7f565658-ea02-4706-9f03-0d9924431fbc
STEP: Creating a pod to test consume secrets
Mar  6 03:20:19.996: INFO: Waiting up to 5m0s for pod "pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a" in namespace "secrets-180" to be "success or failure"
Mar  6 03:20:19.998: INFO: Pod "pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032907ms
Mar  6 03:20:22.001: INFO: Pod "pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004349115s
STEP: Saw pod success
Mar  6 03:20:22.001: INFO: Pod "pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a" satisfied condition "success or failure"
Mar  6 03:20:22.002: INFO: Trying to get logs from node worker02 pod pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a container secret-volume-test: 
STEP: delete the pod
Mar  6 03:20:22.026: INFO: Waiting for pod pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a to disappear
Mar  6 03:20:22.027: INFO: Pod pod-secrets-6a017ce8-c0c9-4a88-bec3-ea8624502a5a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:20:22.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-180" for this suite.
STEP: Destroying namespace "secret-namespace-1050" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1785,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:20:22.041: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5721
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar  6 03:20:22.178: INFO: Waiting up to 5m0s for pod "pod-956618cb-990c-4edd-8a20-3ebf3433df4b" in namespace "emptydir-5721" to be "success or failure"
Mar  6 03:20:22.180: INFO: Pod "pod-956618cb-990c-4edd-8a20-3ebf3433df4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469775ms
Mar  6 03:20:24.185: INFO: Pod "pod-956618cb-990c-4edd-8a20-3ebf3433df4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006845277s
STEP: Saw pod success
Mar  6 03:20:24.185: INFO: Pod "pod-956618cb-990c-4edd-8a20-3ebf3433df4b" satisfied condition "success or failure"
Mar  6 03:20:24.187: INFO: Trying to get logs from node worker02 pod pod-956618cb-990c-4edd-8a20-3ebf3433df4b container test-container: 
STEP: delete the pod
Mar  6 03:20:24.233: INFO: Waiting for pod pod-956618cb-990c-4edd-8a20-3ebf3433df4b to disappear
Mar  6 03:20:24.236: INFO: Pod pod-956618cb-990c-4edd-8a20-3ebf3433df4b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:20:24.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5721" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1809,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:20:24.245: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2328
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-b206ea0e-f179-451c-a73f-1084efcd0745
STEP: Creating a pod to test consume secrets
Mar  6 03:20:24.387: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db" in namespace "projected-2328" to be "success or failure"
Mar  6 03:20:24.391: INFO: Pod "pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db": Phase="Pending", Reason="", readiness=false. Elapsed: 3.622879ms
Mar  6 03:20:26.393: INFO: Pod "pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005886992s
STEP: Saw pod success
Mar  6 03:20:26.393: INFO: Pod "pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db" satisfied condition "success or failure"
Mar  6 03:20:26.396: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db container secret-volume-test: 
STEP: delete the pod
Mar  6 03:20:26.414: INFO: Waiting for pod pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db to disappear
Mar  6 03:20:26.433: INFO: Pod pod-projected-secrets-4990dff1-e9d9-42d4-b394-02747ee9f7db no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:20:26.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2328" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1830,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:20:26.446: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-7545
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:20:27.293: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar  6 03:20:29.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061627, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061627, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061627, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719061627, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:20:32.323: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
Mar  6 03:20:42.340: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:20:52.449: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:21:02.550: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:21:12.650: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:21:22.658: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:21:22.658: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-7545".
STEP: Found 6 events.
Mar  6 03:21:22.661: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-p4tgc: {default-scheduler } Scheduled: Successfully assigned webhook-7545/sample-webhook-deployment-5f65f8c764-p4tgc to worker02
Mar  6 03:21:22.661: INFO: At 2020-03-06 03:20:27 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:21:22.661: INFO: At 2020-03-06 03:20:27 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-p4tgc
Mar  6 03:21:22.661: INFO: At 2020-03-06 03:20:27 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-p4tgc: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:21:22.661: INFO: At 2020-03-06 03:20:28 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-p4tgc: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:21:22.661: INFO: At 2020-03-06 03:20:28 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-p4tgc: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:21:22.664: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:21:22.664: INFO: sample-webhook-deployment-5f65f8c764-p4tgc  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:20:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:20:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:20:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:20:27 +0000 UTC  }]
Mar  6 03:21:22.664: INFO: 
Mar  6 03:21:22.666: INFO: 
Logging node info for node master01
Mar  6 03:21:22.669: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 14604 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:19:00 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:21:22.669: INFO: 
Logging kubelet events for node master01
Mar  6 03:21:22.673: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:21:22.683: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:21:22.683: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:21:22.683: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:21:22.683: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.683: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:21:22.683: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.683: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:21:22.683: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.683: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:21:22.683: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.683: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:21:22.683: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:21:22.683: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:21:22.683: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:21:22.686205      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:21:22.704: INFO: 
Latency metrics for node master01
Mar  6 03:21:22.704: INFO: 
Logging node info for node master02
Mar  6 03:21:22.706: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 14587 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:21:22.706: INFO: 
Logging kubelet events for node master02
Mar  6 03:21:22.710: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:21:22.724: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.724: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:21:22.724: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.724: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:21:22.724: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.724: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:21:22.724: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.724: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:21:22.724: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:21:22.724: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:21:22.724: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:21:22.724: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.724: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:21:22.724: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:21:22.724: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:21:22.724: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:21:22.727174      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:21:22.748: INFO: 
Latency metrics for node master02
Mar  6 03:21:22.748: INFO: 
Logging node info for node master03
Mar  6 03:21:22.756: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 14588 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:18:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:21:22.757: INFO: 
Logging kubelet events for node master03
Mar  6 03:21:22.761: INFO: 
Logging pods the kubelet thinks is on node master03
Mar  6 03:21:22.772: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:21:22.772: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:21:22.772: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:21:22.772: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
Mar  6 03:21:22.772: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:21:22.772: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:21:22.772: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:21:22.772: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:21:22.772: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:21:22.772: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:21:22.772: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:21:22.772: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:21:22.774437      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:21:22.801: INFO: 
Latency metrics for node master03
Mar  6 03:21:22.801: INFO: 
Logging node info for node worker01
Mar  6 03:21:22.803: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 14805 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:21:22.803: INFO: 
Logging kubelet events for node worker01
Mar  6 03:21:22.807: INFO: 
Logging pods the kubelet thinks is on node worker01
Mar  6 03:21:22.819: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:21:22.819: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:21:22.819: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:21:22.819: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:21:22.819: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:21:22.819: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:21:22.819: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:21:22.819: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:21:22.819: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:21:22.819: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:21:22.819: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:21:22.819: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:21:22.819: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:21:22.819: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.819: INFO: 	Container kuard ready: true, restart count 0
W0306 03:21:22.821797      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:21:22.838: INFO: 
Latency metrics for node worker01
Mar  6 03:21:22.838: INFO: 
Logging node info for node worker02
Mar  6 03:21:22.840: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 14565 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:18:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:18:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:21:22.841: INFO: 
Logging kubelet events for node worker02
Mar  6 03:21:22.844: INFO: 
Logging pods the kubelet thinks is on node worker02
Mar  6 03:21:22.848: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:21:22.848: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:21:22.848: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:21:22.848: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:21:22.848: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:21:22.848: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:21:22.848: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:21:22.848: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:21:22.848: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:21:22.848: INFO: sample-webhook-deployment-5f65f8c764-p4tgc started at 2020-03-06 03:20:27 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.848: INFO: 	Container sample-webhook ready: true, restart count 0
Mar  6 03:21:22.848: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:21:22.848: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:21:22.848: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:21:22.848: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.848: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:21:22.848: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:21:22.848: INFO: 	Container kube-proxy ready: true, restart count 1
W0306 03:21:22.850683      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:21:22.870: INFO: 
Latency metrics for node worker02
Mar  6 03:21:22.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7545" for this suite.
STEP: Destroying namespace "webhook-7545-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [56.489 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:21:22.658: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:963
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":102,"skipped":1838,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:21:22.935: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7644
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-03efa1d3-f3fc-480d-87f5-f76a1ca41b67
STEP: Creating a pod to test consume configMaps
Mar  6 03:21:23.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa" in namespace "projected-7644" to be "success or failure"
Mar  6 03:21:23.096: INFO: Pod "pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa": Phase="Pending", Reason="", readiness=false. Elapsed: 1.996753ms
Mar  6 03:21:25.098: INFO: Pod "pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004453226s
STEP: Saw pod success
Mar  6 03:21:25.098: INFO: Pod "pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa" satisfied condition "success or failure"
Mar  6 03:21:25.103: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa container projected-configmap-volume-test: 
STEP: delete the pod
Mar  6 03:21:25.125: INFO: Waiting for pod pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa to disappear
Mar  6 03:21:25.127: INFO: Pod pod-projected-configmaps-ce84443e-6d6e-439d-8fe4-87f9f54b89fa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:21:25.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7644" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1866,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:21:25.136: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-5556
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-ea657bbb-a296-444c-b56e-9d02d26887a1
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:21:25.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5556" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":104,"skipped":1919,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:21:25.272: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-792
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-792 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-792;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-792 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-792;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-792.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-792.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-792.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-792.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-792.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-792.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-792.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.204.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.204.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.204.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.204.71_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-792 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-792;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-792 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-792;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-792.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-792.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-792.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-792.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-792.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-792.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-792.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-792.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-792.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-792.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.204.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.204.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.204.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.204.71_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:21:29.469: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.471: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.473: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.476: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.478: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.480: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.483: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.485: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.498: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.500: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.504: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.509: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.518: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.520: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:29.532: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc]

Mar  6 03:21:34.536: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.538: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.542: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.545: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.547: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.549: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.551: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.566: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.568: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.570: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.572: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.574: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.576: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.578: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.580: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:34.592: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc]

Mar  6 03:21:39.541: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.544: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.548: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.551: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.563: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.566: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.568: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.582: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.584: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.586: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.589: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.590: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.596: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.598: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:39.613: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc]

Mar  6 03:21:44.535: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.538: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.542: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.545: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.547: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.549: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.551: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.564: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.566: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.568: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.572: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.575: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.580: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.582: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:44.596: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc]

Mar  6 03:21:49.536: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.538: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.547: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.549: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.551: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.556: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.571: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.574: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.576: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.580: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.582: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.584: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:49.598: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc]

Mar  6 03:21:54.537: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.540: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.542: INFO: Unable to read wheezy_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.544: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.548: INFO: Unable to read wheezy_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.550: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.552: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.567: INFO: Unable to read jessie_udp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.568: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.571: INFO: Unable to read jessie_udp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.573: INFO: Unable to read jessie_tcp@dns-test-service.dns-792 from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.575: INFO: Unable to read jessie_udp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.577: INFO: Unable to read jessie_tcp@dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.579: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.581: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-792.svc from pod dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad: the server could not find the requested resource (get pods dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad)
Mar  6 03:21:54.592: INFO: Lookups using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-792 wheezy_tcp@dns-test-service.dns-792 wheezy_udp@dns-test-service.dns-792.svc wheezy_tcp@dns-test-service.dns-792.svc wheezy_udp@_http._tcp.dns-test-service.dns-792.svc wheezy_tcp@_http._tcp.dns-test-service.dns-792.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-792 jessie_tcp@dns-test-service.dns-792 jessie_udp@dns-test-service.dns-792.svc jessie_tcp@dns-test-service.dns-792.svc jessie_udp@_http._tcp.dns-test-service.dns-792.svc jessie_tcp@_http._tcp.dns-test-service.dns-792.svc]
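The failed-lookup list above is exactly the cross product of the two probe images (wheezy, jessie), the four DNS name variants for the service, and both protocols (udp, tcp). A small illustrative sketch (not the e2e framework's code; names hardcoded from this run in namespace dns-792) reproduces the key list in log order:

```python
# Illustrative sketch only: rebuilds the probe-result keys seen in the
# "Lookups ... failed for" line as a cross product. All names below are
# taken from this specific run (service dns-test-service, namespace dns-792).
IMAGES = ("wheezy", "jessie")
PROTOCOLS = ("udp", "tcp")
NAMES = (
    "dns-test-service",
    "dns-test-service.dns-792",
    "dns-test-service.dns-792.svc",
    "_http._tcp.dns-test-service.dns-792.svc",
)

def probe_keys():
    """Return the 16 probe keys in the order they appear in the log:
    per image, per name, udp before tcp."""
    return [
        f"{image}_{proto}@{name}"
        for image in IMAGES
        for name in NAMES
        for proto in PROTOCOLS
    ]
```

This matches the log ordering: all eight wheezy entries first, then the eight jessie entries.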

Mar  6 03:21:59.593: INFO: DNS probes using dns-792/dns-test-75eff2a5-e4f2-4d19-b4c2-d17b8afc9fad succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:21:59.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-792" for this suite.

• [SLOW TEST:34.406 seconds]
[sig-network] DNS
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":105,"skipped":1929,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:21:59.678: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename namespaces
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-2387
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-895
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3698
STEP: Verifying there is no service in the namespace
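The "Waiting for the namespace to be removed." step above is a poll-until-gone loop with a timeout. A minimal stand-in sketch of that pattern (function names are illustrative, not the e2e framework's or client-go's API):

```python
import time

def wait_for_deleted(get_namespace, name, timeout=60.0, interval=1.0):
    """Poll until get_namespace(name) reports the namespace gone.

    get_namespace is a hypothetical stand-in for a client call that
    returns None once deletion has completed (a real client would
    raise a NotFound error instead). Returns True if the namespace
    disappeared within the timeout, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_namespace(name) is None:
            return True
        time.sleep(interval)
    return False
```

The real test then recreates the namespace and verifies no service survived the deletion, as the subsequent STEP lines show.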
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:22:42.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2387" for this suite.
STEP: Destroying namespace "nsdeletetest-895" for this suite.
Mar  6 03:22:42.126: INFO: Namespace nsdeletetest-895 was already deleted
STEP: Destroying namespace "nsdeletetest-3698" for this suite.

• [SLOW TEST:42.451 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":106,"skipped":1949,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:22:42.129: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8752
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8752.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8752.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
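Inside both probe scripts above, the `podARec=$$(hostname -i | awk -F. '{...}')` pipeline converts the pod's IPv4 address into its cluster DNS A-record name (dots become dashes, then the namespace and `pod.cluster.local` suffix are appended). An equivalent sketch in Python, for IPv4 addresses only (illustrative, not part of the test image):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the awk pipeline in the probe script: an IPv4 pod IP such
    as 10.244.1.5 maps to 10-244-1-5.<namespace>.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.cluster.local"
```

The probe then resolves that record over both UDP and TCP (`dig +notcp` / `dig +tcp`) and writes an OK marker file per successful lookup.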

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:22:44.296: INFO: DNS probes using dns-8752/dns-test-9ce39c05-7091-4cb8-aaa7-dd28c12ad1e2 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:22:44.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8752" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":107,"skipped":1959,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:22:44.321: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-6558
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar  6 03:22:44.457: INFO: Waiting up to 5m0s for pod "downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0" in namespace "downward-api-6558" to be "success or failure"
Mar  6 03:22:44.459: INFO: Pod "downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.983049ms
Mar  6 03:22:46.461: INFO: Pod "downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004031712s
STEP: Saw pod success
Mar  6 03:22:46.461: INFO: Pod "downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0" satisfied condition "success or failure"
Mar  6 03:22:46.463: INFO: Trying to get logs from node worker02 pod downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0 container dapi-container: 
STEP: delete the pod
Mar  6 03:22:46.475: INFO: Waiting for pod downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0 to disappear
Mar  6 03:22:46.477: INFO: Pod downward-api-34457227-1de7-4c21-ac4c-4cd91732adc0 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:22:46.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6558" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1988,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:22:46.484: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-3107
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:22:46.842: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:22:49.876: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:22:49.879: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:23:25.460: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7161-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-3107.svc:9443/crdconvert?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar  6 03:23:55.564: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7161-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-3107.svc:9443/crdconvert?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Mar  6 03:24:25.568: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-7161-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-3107.svc:9443/crdconvert?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Mar  6 03:24:25.568: FAIL: Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
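The three setup errors above all target the same endpoint: the apiserver posts conversion requests to the webhook service by its cluster DNS name, `<service>.<namespace>.svc`, on the configured port and path, and each POST times out after 30s until the overall wait gives up. A sketch of how that URL is assembled (illustrative helper, not Kubernetes source; values taken from the error lines above):

```python
def conversion_webhook_url(service: str, namespace: str,
                           port: int = 9443, timeout_s: int = 30) -> str:
    """Build the CR conversion webhook URL in the form seen in the
    error messages above: https://<service>.<namespace>.svc:<port>/crdconvert."""
    return f"https://{service}.{namespace}.svc:{port}/crdconvert?timeout={timeout_s}s"
```

Since the webhook pod's events show it was scheduled, pulled, and started successfully, the repeated client-side timeouts point at the service-to-pod network path rather than the pod itself.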
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "crd-webhook-3107".
STEP: Found 6 events.
Mar  6 03:24:26.082: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd: {default-scheduler } Scheduled: Successfully assigned crd-webhook-3107/sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd to worker02
Mar  6 03:24:26.082: INFO: At 2020-03-06 03:22:46 +0000 UTC - event for sample-crd-conversion-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-crd-conversion-webhook-deployment-78dcf5dd84 to 1
Mar  6 03:24:26.082: INFO: At 2020-03-06 03:22:46 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84: {replicaset-controller } SuccessfulCreate: Created pod: sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd
Mar  6 03:24:26.082: INFO: At 2020-03-06 03:22:47 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:24:26.082: INFO: At 2020-03-06 03:22:47 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd: {kubelet worker02} Created: Created container sample-crd-conversion-webhook
Mar  6 03:24:26.082: INFO: At 2020-03-06 03:22:47 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd: {kubelet worker02} Started: Started container sample-crd-conversion-webhook
Mar  6 03:24:26.084: INFO: POD                                                        NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:24:26.084: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:22:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:22:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:22:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:22:46 +0000 UTC  }]
Mar  6 03:24:26.084: INFO: 
Mar  6 03:24:26.087: INFO: 
Logging node info for node master01
Mar  6 03:24:26.089: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 16254 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:24:01 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:24:26.089: INFO: 
Logging kubelet events for node master01
Mar  6 03:24:26.093: INFO: 
Logging pods the kubelet thinks is on node master01
Mar  6 03:24:26.102: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:24:26.102: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:24:26.102: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:24:26.102: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:24:26.102: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:24:26.102: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:24:26.102: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.102: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:24:26.102: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.102: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:24:26.102: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.102: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:24:26.102: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.102: INFO: 	Container kube-scheduler ready: true, restart count 1
W0306 03:24:26.105533      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:24:26.125: INFO: 
Latency metrics for node master01
Mar  6 03:24:26.125: INFO: 
Logging node info for node master02
Mar  6 03:24:26.126: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 16243 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:24:26.127: INFO: 
Logging kubelet events for node master02
Mar  6 03:24:26.130: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:24:26.142: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.142: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:24:26.142: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.142: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:24:26.142: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:24:26.142: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:24:26.142: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:24:26.142: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.142: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:24:26.142: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:24:26.142: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:24:26.142: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:24:26.142: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.142: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:24:26.142: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.142: INFO: 	Container kube-controller-manager ready: true, restart count 1
W0306 03:24:26.148940      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:24:26.165: INFO: 
Latency metrics for node master02
Mar  6 03:24:26.165: INFO: 
Logging node info for node master03
Mar  6 03:24:26.167: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 16244 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:23:58 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:24:26.167: INFO: 
Logging kubelet events for node master03
Mar  6 03:24:26.171: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:24:26.187: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:24:26.187: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:24:26.187: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
Mar  6 03:24:26.187: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:24:26.187: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:24:26.187: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:24:26.187: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:24:26.187: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:24:26.187: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:24:26.187: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:24:26.187: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.187: INFO: 	Container kube-apiserver ready: true, restart count 0
W0306 03:24:26.193419      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:24:26.216: INFO: 
Latency metrics for node master03
Mar  6 03:24:26.216: INFO: 
Logging node info for node worker01
Mar  6 03:24:26.218: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 14805 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:19:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:24:26.218: INFO: 
Logging kubelet events for node worker01
Mar  6 03:24:26.223: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:24:26.233: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:24:26.233: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:24:26.233: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:24:26.233: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:24:26.233: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:24:26.233: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:24:26.233: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:24:26.233: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:24:26.233: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:24:26.233: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:24:26.233: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:24:26.233: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:24:26.233: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:24:26.233: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.233: INFO: 	Container metrics-server ready: true, restart count 0
W0306 03:24:26.236428      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:24:26.255: INFO: 
Latency metrics for node worker01
Mar  6 03:24:26.255: INFO: 
Logging node info for node worker02
Mar  6 03:24:26.257: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 16224 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:23:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:23:52 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:24:26.258: INFO: 
Logging kubelet events for node worker02
Mar  6 03:24:26.261: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:24:26.271: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:24:26.271: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:24:26.271: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:24:26.271: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:24:26.271: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:24:26.271: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:24:26.271: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:24:26.271: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:24:26.271: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:24:26.271: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:24:26.271: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:24:26.271: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:24:26.271: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-sf2fd started at 2020-03-06 03:22:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.271: INFO: 	Container sample-crd-conversion-webhook ready: true, restart count 0
Mar  6 03:24:26.271: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.271: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:24:26.271: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:24:26.271: INFO: 	Container kube-sonobuoy ready: true, restart count 0
W0306 03:24:26.273848      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:24:26.300: INFO: 
Latency metrics for node worker02
Mar  6 03:24:26.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3107" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• Failure [99.904 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:24:25.568: Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:493
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":108,"skipped":1988,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:24:26.388: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-6523
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6523
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Mar  6 03:24:26.546: INFO: Found 0 stateful pods, waiting for 3
Mar  6 03:24:36.548: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 03:24:36.548: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 03:24:36.548: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar  6 03:24:36.569: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar  6 03:24:46.594: INFO: Updating stateful set ss2
Mar  6 03:24:46.602: INFO: Waiting for Pod statefulset-6523/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Mar  6 03:24:56.642: INFO: Found 2 stateful pods, waiting for 3
Mar  6 03:25:06.645: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 03:25:06.645: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 03:25:06.645: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar  6 03:25:06.663: INFO: Updating stateful set ss2
Mar  6 03:25:06.667: INFO: Waiting for Pod statefulset-6523/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar  6 03:25:16.686: INFO: Updating stateful set ss2
Mar  6 03:25:16.691: INFO: Waiting for StatefulSet statefulset-6523/ss2 to complete update
Mar  6 03:25:16.691: INFO: Waiting for Pod statefulset-6523/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar  6 03:25:26.695: INFO: Deleting all statefulset in ns statefulset-6523
Mar  6 03:25:26.698: INFO: Scaling statefulset ss2 to 0
Mar  6 03:25:56.713: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 03:25:56.718: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:25:56.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6523" for this suite.

• [SLOW TEST:90.347 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":109,"skipped":2008,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:25:56.735: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-2668
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:25:58.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2668" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":2014,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:25:58.894: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-2216
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar  6 03:26:03.060: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar  6 03:26:03.062: INFO: Pod pod-with-prestop-http-hook still exists
Mar  6 03:26:05.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar  6 03:26:05.065: INFO: Pod pod-with-prestop-http-hook still exists
Mar  6 03:26:07.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar  6 03:26:07.066: INFO: Pod pod-with-prestop-http-hook still exists
Mar  6 03:26:09.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar  6 03:26:09.065: INFO: Pod pod-with-prestop-http-hook still exists
Mar  6 03:26:11.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar  6 03:26:11.066: INFO: Pod pod-with-prestop-http-hook still exists
Mar  6 03:26:13.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar  6 03:26:13.065: INFO: Pod pod-with-prestop-http-hook still exists
Mar  6 03:26:15.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar  6 03:26:15.065: INFO: Pod pod-with-prestop-http-hook still exists
Mar  6 03:26:17.063: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar  6 03:26:17.065: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:26:17.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2216" for this suite.

• [SLOW TEST:18.188 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":2034,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:26:17.082: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-9945
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-ed365452-1e34-41bc-90bc-fd677e134e14
STEP: Creating a pod to test consume secrets
Mar  6 03:26:17.223: INFO: Waiting up to 5m0s for pod "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3" in namespace "secrets-9945" to be "success or failure"
Mar  6 03:26:17.225: INFO: Pod "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196762ms
Mar  6 03:26:19.228: INFO: Pod "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004860122s
STEP: Saw pod success
Mar  6 03:26:19.228: INFO: Pod "pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3" satisfied condition "success or failure"
Mar  6 03:26:19.230: INFO: Trying to get logs from node worker02 pod pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3 container secret-volume-test: 
STEP: delete the pod
Mar  6 03:26:19.243: INFO: Waiting for pod pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3 to disappear
Mar  6 03:26:19.245: INFO: Pod pod-secrets-40ea5a82-96ab-446a-8038-d545c78700a3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:26:19.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9945" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":2034,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:26:19.251: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-485
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:26:30.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-485" for this suite.

• [SLOW TEST:11.208 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":113,"skipped":2039,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:26:30.459: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8679
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar  6 03:26:30.592: INFO: Waiting up to 5m0s for pod "pod-76d03f27-cab3-4010-8090-2d88690b9dea" in namespace "emptydir-8679" to be "success or failure"
Mar  6 03:26:30.594: INFO: Pod "pod-76d03f27-cab3-4010-8090-2d88690b9dea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025446ms
Mar  6 03:26:32.597: INFO: Pod "pod-76d03f27-cab3-4010-8090-2d88690b9dea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004875639s
STEP: Saw pod success
Mar  6 03:26:32.597: INFO: Pod "pod-76d03f27-cab3-4010-8090-2d88690b9dea" satisfied condition "success or failure"
Mar  6 03:26:32.599: INFO: Trying to get logs from node worker02 pod pod-76d03f27-cab3-4010-8090-2d88690b9dea container test-container: 
STEP: delete the pod
Mar  6 03:26:32.621: INFO: Waiting for pod pod-76d03f27-cab3-4010-8090-2d88690b9dea to disappear
Mar  6 03:26:32.623: INFO: Pod pod-76d03f27-cab3-4010-8090-2d88690b9dea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:26:32.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8679" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":2053,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:26:32.630: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-3269
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3269
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3269
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3269
Mar  6 03:26:32.771: INFO: Found 0 stateful pods, waiting for 1
Mar  6 03:26:42.773: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Mar  6 03:26:42.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-3269 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 03:26:52.977: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 03:26:52.977: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 03:26:52.977: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 03:26:52.979: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar  6 03:27:02.982: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar  6 03:27:02.982: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 03:27:02.990: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999984s
Mar  6 03:27:03.993: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997863881s
Mar  6 03:27:04.996: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.994548038s
Mar  6 03:27:05.999: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.991796876s
Mar  6 03:27:07.003: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.989248162s
Mar  6 03:27:08.007: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.985231551s
Mar  6 03:27:09.013: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.981078852s
Mar  6 03:27:10.015: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.975233933s
Mar  6 03:27:11.018: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.972770927s
Mar  6 03:27:12.021: INFO: Verifying statefulset ss doesn't scale past 1 for another 969.884872ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3269
Mar  6 03:27:13.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-3269 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 03:27:13.202: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar  6 03:27:13.202: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 03:27:13.202: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 03:27:13.207: INFO: Found 1 stateful pods, waiting for 3
Mar  6 03:27:23.210: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 03:27:23.210: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar  6 03:27:23.210: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Mar  6 03:27:23.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-3269 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 03:27:23.409: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 03:27:23.409: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 03:27:23.409: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 03:27:23.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-3269 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 03:27:23.618: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 03:27:23.618: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 03:27:23.618: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 03:27:23.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-3269 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar  6 03:27:23.797: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Mar  6 03:27:23.797: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar  6 03:27:23.797: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Mar  6 03:27:23.797: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 03:27:23.799: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Mar  6 03:27:33.805: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar  6 03:27:33.805: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar  6 03:27:33.805: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar  6 03:27:33.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999744s
Mar  6 03:27:34.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991511788s
Mar  6 03:27:35.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988590314s
Mar  6 03:27:36.829: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985901443s
Mar  6 03:27:37.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982907787s
Mar  6 03:27:38.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980071164s
Mar  6 03:27:39.837: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.977417305s
Mar  6 03:27:40.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.974840529s
Mar  6 03:27:41.843: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.972296963s
Mar  6 03:27:42.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 968.54371ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3269
Mar  6 03:27:43.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-3269 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 03:27:44.020: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar  6 03:27:44.020: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 03:27:44.020: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 03:27:44.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-3269 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 03:27:44.224: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar  6 03:27:44.224: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 03:27:44.224: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 03:27:44.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=statefulset-3269 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar  6 03:27:44.402: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar  6 03:27:44.402: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar  6 03:27:44.402: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar  6 03:27:44.402: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar  6 03:28:14.416: INFO: Deleting all statefulset in ns statefulset-3269
Mar  6 03:28:14.418: INFO: Scaling statefulset ss to 0
Mar  6 03:28:14.424: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 03:28:14.426: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:28:14.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3269" for this suite.

• [SLOW TEST:101.811 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":115,"skipped":2058,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
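The test above exercises the OrderedReady pod management policy: the controller creates pods one ordinal at a time (ss-0, ss-1, ss-2), deletes the highest ordinal first, and halts both directions while any pod is unready — which is why hiding httpd's index.html (failing the readiness probe) produces the repeated "doesn't scale past N" lines. The gating behavior observed in this log can be modeled with a small Python sketch; this is an illustrative model written for this note, not the real controller code, and `next_action`/its tuple return shape are assumptions of the sketch:

```python
def next_action(desired, ready):
    """Illustrative model of OrderedReady StatefulSet scaling.

    `ready` maps existing pod ordinals (0..n-1) to readiness booleans;
    `desired` is spec.replicas. Returns ("create", ordinal) or
    ("delete", ordinal) for the single step the controller may take
    next, or None when it must wait (unhealthy pod) or is done.
    """
    current = len(ready)
    if any(not ok for ok in ready.values()):
        return None  # an unready pod halts scale-up AND scale-down
    if current < desired:
        return ("create", current)      # lowest missing ordinal first
    if current > desired:
        return ("delete", current - 1)  # highest ordinal first (reverse order)
    return None
```

With ss-0 unready, `next_action(3, {0: False})` yields `None`, matching the "doesn't scale past 1" wait; with all three pods unready, `next_action(0, {0: False, 1: False, 2: False})` also yields `None`, matching the "doesn't scale past 3" wait before the files are restored.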
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:28:14.442: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2532
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[]
Mar  6 03:28:14.603: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[] (2.660509ms elapsed)
STEP: Creating pod pod1 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[pod1:[80]]
Mar  6 03:28:15.636: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[pod1:[80]] (1.014076062s elapsed)
STEP: Creating pod pod2 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[pod1:[80] pod2:[80]]
Mar  6 03:28:17.660: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[pod1:[80] pod2:[80]] (2.021303392s elapsed)
STEP: Deleting pod pod1 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[pod2:[80]]
Mar  6 03:28:18.675: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[pod2:[80]] (1.010843009s elapsed)
STEP: Deleting pod pod2 in namespace services-2532
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2532 to expose endpoints map[]
Mar  6 03:28:19.684: INFO: successfully validated that service endpoint-test2 in namespace services-2532 exposes endpoints map[] (1.004704376s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:28:19.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2532" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:5.302 seconds]
[sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":116,"skipped":2091,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
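This Services test repeatedly waits for the service's Endpoints object to equal an expected map of pod name to container ports as pods are created and deleted (map[] → map[pod1:[80]] → map[pod1:[80] pod2:[80]] → map[pod2:[80]] → map[]). The invariant it checks can be sketched as a pure function; the tuple shape of `pods` here is an assumption of this sketch, not the e2e framework's actual types:

```python
def expected_endpoints(pods):
    """Illustrative model of the endpoint map the test waits for:
    a service's endpoints should list exactly the ready pods matching
    its selector, mapped to their container ports.

    `pods` is a list of (name, ready, ports) tuples.
    """
    return {name: sorted(ports) for name, ready, ports in pods if ready}
```

Replaying the log's sequence: an empty cluster gives `{}`, adding pod1 gives `{"pod1": [80]}`, adding pod2 gives both entries, and deleting pod1 leaves only `{"pod2": [80]}`.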
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:28:19.744: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-2067
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-5a8082d5-4afc-4447-87d5-cf998c05c8d5
STEP: Creating a pod to test consume secrets
Mar  6 03:28:19.894: INFO: Waiting up to 5m0s for pod "pod-secrets-3d7526b3-1a26-41af-b581-a02f971c6132" in namespace "secrets-2067" to be "success or failure"
Mar  6 03:28:19.896: INFO: Pod "pod-secrets-3d7526b3-1a26-41af-b581-a02f971c6132": Phase="Pending", Reason="", readiness=false. Elapsed: 1.928935ms
Mar  6 03:28:21.898: INFO: Pod "pod-secrets-3d7526b3-1a26-41af-b581-a02f971c6132": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004285394s
STEP: Saw pod success
Mar  6 03:28:21.898: INFO: Pod "pod-secrets-3d7526b3-1a26-41af-b581-a02f971c6132" satisfied condition "success or failure"
Mar  6 03:28:21.900: INFO: Trying to get logs from node worker02 pod pod-secrets-3d7526b3-1a26-41af-b581-a02f971c6132 container secret-volume-test: 
STEP: delete the pod
Mar  6 03:28:21.924: INFO: Waiting for pod pod-secrets-3d7526b3-1a26-41af-b581-a02f971c6132 to disappear
Mar  6 03:28:21.927: INFO: Pod pod-secrets-3d7526b3-1a26-41af-b581-a02f971c6132 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:28:21.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2067" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":2093,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
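The Secrets test above mounts a secret volume with `defaultMode` set and verifies the resulting file permissions inside the pod. A practical detail when writing such manifests: `defaultMode` is a plain integer in the JSON API, so the familiar octal notation (0644, 0400) must be converted to decimal (420, 256) unless your YAML tooling parses octal literals for you. A small sketch of that correspondence (helper names are ours, not a Kubernetes API):

```python
def octal_mode_to_decimal(mode_str):
    """Parse an octal mode string like '0644' into the decimal int
    that appears in the JSON representation of defaultMode."""
    return int(mode_str, 8)

def decimal_to_octal_mode(mode):
    """Render a decimal defaultMode value back as the familiar
    zero-padded octal string."""
    return format(mode, "04o")
```

For example, a secret volume meant to be world-unreadable with `0400` permissions would carry `defaultMode: 256` in JSON.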
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:28:21.933: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-6786
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:28:22.067: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5ece4e53-b37b-4e98-ba23-184d35ae0847" in namespace "security-context-test-6786" to be "success or failure"
Mar  6 03:28:22.069: INFO: Pod "busybox-privileged-false-5ece4e53-b37b-4e98-ba23-184d35ae0847": Phase="Pending", Reason="", readiness=false. Elapsed: 1.804275ms
Mar  6 03:28:24.071: INFO: Pod "busybox-privileged-false-5ece4e53-b37b-4e98-ba23-184d35ae0847": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004249875s
Mar  6 03:28:24.071: INFO: Pod "busybox-privileged-false-5ece4e53-b37b-4e98-ba23-184d35ae0847" satisfied condition "success or failure"
Mar  6 03:28:24.076: INFO: Got logs for pod "busybox-privileged-false-5ece4e53-b37b-4e98-ba23-184d35ae0847": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:28:24.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6786" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":2125,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
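The Security Context test passes precisely because the container's network-admin operation is denied: with `privileged: false`, capabilities like CAP_NET_ADMIN are withheld and the `ip` command prints "RTNETLINK answers: Operation not permitted". A hypothetical minimal pod spec with that security context, built as a Python dict mirroring the JSON API shape (the busybox image and the exact `ip link` command are assumptions for illustration; the log does not show the command itself):

```python
def unprivileged_busybox_pod(name):
    """Hypothetical minimal pod spec with privileged explicitly disabled,
    so kernel capabilities such as CAP_NET_ADMIN are withheld and network
    configuration commands inside the container fail."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": "busybox",
                # Assumed command: attempts a privileged network operation.
                "command": ["sh", "-c", "ip link add dummy0 type dummy || true"],
                "securityContext": {"privileged": False},
            }],
            "restartPolicy": "Never",
        },
    }
```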
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:28:24.084: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3824
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:28:24.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:28:27.424: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
Mar  6 03:28:37.441: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:28:47.550: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:28:57.650: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:29:07.751: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:29:17.760: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:29:17.760: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-3824".
STEP: Found 6 events.
Mar  6 03:29:17.768: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2jg5d: {default-scheduler } Scheduled: Successfully assigned webhook-3824/sample-webhook-deployment-5f65f8c764-2jg5d to worker02
Mar  6 03:29:17.768: INFO: At 2020-03-06 03:28:24 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:29:17.768: INFO: At 2020-03-06 03:28:24 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-2jg5d
Mar  6 03:29:17.768: INFO: At 2020-03-06 03:28:25 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2jg5d: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:29:17.768: INFO: At 2020-03-06 03:28:25 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2jg5d: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:29:17.768: INFO: At 2020-03-06 03:28:25 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-2jg5d: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:29:17.770: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:29:17.770: INFO: sample-webhook-deployment-5f65f8c764-2jg5d  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:28:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:28:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:28:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:28:24 +0000 UTC  }]
Mar  6 03:29:17.770: INFO: 
Mar  6 03:29:17.772: INFO: 
Logging node info for node master01
Mar  6 03:29:17.774: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 18002 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:29:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:29:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:29:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:29:02 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:29:17.774: INFO: 
Logging kubelet events for node master01
Mar  6 03:29:17.778: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:29:17.790: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:29:17.790: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:29:17.790: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:29:17.790: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.790: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:29:17.790: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.790: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:29:17.790: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.790: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:29:17.790: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.790: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:29:17.790: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:29:17.790: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:29:17.790: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:29:17.792623      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:29:17.812: INFO: 
Latency metrics for node master01
Mar  6 03:29:17.812: INFO: 
Logging node info for node master02
Mar  6 03:29:17.813: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 17988 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:28:59 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:29:17.814: INFO: 
Logging kubelet events for node master02
Mar  6 03:29:17.817: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:29:17.828: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:29:17.828: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:29:17.828: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:29:17.828: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.828: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:29:17.828: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:29:17.828: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:29:17.828: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:29:17.828: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.828: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:29:17.828: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.828: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:29:17.828: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.828: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:29:17.828: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.828: INFO: 	Container kube-proxy ready: true, restart count 0
W0306 03:29:17.831285      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:29:17.854: INFO: 
Latency metrics for node master02
Mar  6 03:29:17.854: INFO: 
Logging node info for node master03
Mar  6 03:29:17.856: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 17989 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:28:59 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:29:17.856: INFO: 
Logging kubelet events for node master03
Mar  6 03:29:17.859: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:29:17.870: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:29:17.870: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:29:17.870: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:29:17.870: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
Mar  6 03:29:17.870: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:29:17.870: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:29:17.870: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:29:17.870: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:29:17.870: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:29:17.870: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:29:17.870: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:29:17.870: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:29:17.872592      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:29:17.890: INFO: 
Latency metrics for node master03
Mar  6 03:29:17.890: INFO: 
Logging node info for node worker01
Mar  6 03:29:17.892: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 16510 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:24:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:24:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:29:17.892: INFO: 
Logging kubelet events for node worker01
Mar  6 03:29:17.896: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:29:17.907: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:29:17.907: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:29:17.907: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:29:17.907: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:29:17.907: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:29:17.907: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:29:17.907: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:29:17.907: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:29:17.907: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:29:17.907: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:29:17.907: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:29:17.907: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:29:17.907: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:29:17.907: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:29:17.907: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:29:17.910041      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:29:17.934: INFO: 
Latency metrics for node worker01
Mar  6 03:29:17.934: INFO: 
Logging node info for node worker02
Mar  6 03:29:17.936: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 17969 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:28:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:28:53 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 
k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:29:17.936: INFO: 
Logging kubelet events for node worker02
Mar  6 03:29:17.940: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:29:17.944: INFO: sample-webhook-deployment-5f65f8c764-2jg5d started at 2020-03-06 03:28:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.944: INFO: 	Container sample-webhook ready: true, restart count 0
Mar  6 03:29:17.944: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:29:17.944: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:29:17.944: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:29:17.944: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:29:17.944: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:29:17.944: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:29:17.944: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:29:17.944: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:29:17.944: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:29:17.944: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:29:17.944: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:29:17.944: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:29:17.944: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.944: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:29:17.944: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:29:17.944: INFO: 	Container kube-sonobuoy ready: true, restart count 0
W0306 03:29:17.946551      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:29:17.969: INFO: 
Latency metrics for node worker02
Mar  6 03:29:17.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3824" for this suite.
STEP: Destroying namespace "webhook-3824-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [53.949 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:29:17.760: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:528
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":118,"skipped":2157,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:29:18.033: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6929
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Mar  6 03:29:18.176: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-780690759 proxy --unix-socket=/tmp/kubectl-proxy-unix672723822/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:29:18.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6929" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":119,"skipped":2187,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:29:18.229: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-543
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:29:18.367: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar  6 03:29:18.376: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:18.376: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:18.376: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:18.377: INFO: Number of nodes with available pods: 0
Mar  6 03:29:18.377: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:29:19.380: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:19.381: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:19.381: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:19.383: INFO: Number of nodes with available pods: 0
Mar  6 03:29:19.383: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:29:20.380: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:20.380: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:20.380: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:20.382: INFO: Number of nodes with available pods: 2
Mar  6 03:29:20.382: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar  6 03:29:20.403: INFO: Wrong image for pod: daemon-set-229zf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:20.403: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:20.408: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:20.408: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:20.408: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:21.416: INFO: Wrong image for pod: daemon-set-229zf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:21.416: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:21.419: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:21.419: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:21.419: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:22.416: INFO: Wrong image for pod: daemon-set-229zf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:22.416: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:22.421: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:22.421: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:22.421: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:23.415: INFO: Wrong image for pod: daemon-set-229zf. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:23.415: INFO: Pod daemon-set-229zf is not available
Mar  6 03:29:23.415: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:23.418: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:23.418: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:23.418: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:24.422: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:24.422: INFO: Pod daemon-set-grr7v is not available
Mar  6 03:29:24.425: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:24.425: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:24.425: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:25.413: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:25.415: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:25.415: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:25.415: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:26.412: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:26.412: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:26.417: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:26.418: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:26.418: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:27.415: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:27.415: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:27.418: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:27.418: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:27.418: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:28.416: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:28.416: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:28.423: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:28.424: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:28.424: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:29.418: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:29.418: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:29.421: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:29.421: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:29.421: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:30.416: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:30.416: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:30.419: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:30.419: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:30.419: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:31.411: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:31.411: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:31.414: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:31.414: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:31.414: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:32.416: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:32.416: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:32.419: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:32.419: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:32.419: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:33.415: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:33.415: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:33.419: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:33.419: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:33.419: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:34.411: INFO: Wrong image for pod: daemon-set-9w4j7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Mar  6 03:29:34.411: INFO: Pod daemon-set-9w4j7 is not available
Mar  6 03:29:34.417: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:34.417: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:34.417: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:35.413: INFO: Pod daemon-set-4xj4d is not available
Mar  6 03:29:35.417: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:35.417: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:35.417: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Mar  6 03:29:35.420: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:35.420: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:35.420: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:35.422: INFO: Number of nodes with available pods: 1
Mar  6 03:29:35.422: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:29:36.425: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:36.425: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:36.425: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:29:36.427: INFO: Number of nodes with available pods: 2
Mar  6 03:29:36.427: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-543, will wait for the garbage collector to delete the pods
Mar  6 03:29:36.495: INFO: Deleting DaemonSet.extensions daemon-set took: 4.725562ms
Mar  6 03:29:36.995: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.124112ms
Mar  6 03:29:45.397: INFO: Number of nodes with available pods: 0
Mar  6 03:29:45.397: INFO: Number of running nodes: 0, number of available pods: 0
Mar  6 03:29:45.399: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-543/daemonsets","resourceVersion":"18257"},"items":null}

Mar  6 03:29:45.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-543/pods","resourceVersion":"18257"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:29:45.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-543" for this suite.

• [SLOW TEST:27.194 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":120,"skipped":2197,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:29:45.423: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1662
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-1662/configmap-test-5c0a818b-5384-4248-85b2-9cd71d99ed8d
STEP: Creating a pod to test consume configMaps
Mar  6 03:29:45.560: INFO: Waiting up to 5m0s for pod "pod-configmaps-57f3ac81-e9c2-431e-a4f0-3d3e710a4f5a" in namespace "configmap-1662" to be "success or failure"
Mar  6 03:29:45.562: INFO: Pod "pod-configmaps-57f3ac81-e9c2-431e-a4f0-3d3e710a4f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.917133ms
Mar  6 03:29:47.564: INFO: Pod "pod-configmaps-57f3ac81-e9c2-431e-a4f0-3d3e710a4f5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004561522s
STEP: Saw pod success
Mar  6 03:29:47.564: INFO: Pod "pod-configmaps-57f3ac81-e9c2-431e-a4f0-3d3e710a4f5a" satisfied condition "success or failure"
Mar  6 03:29:47.566: INFO: Trying to get logs from node worker02 pod pod-configmaps-57f3ac81-e9c2-431e-a4f0-3d3e710a4f5a container env-test: 
STEP: delete the pod
Mar  6 03:29:47.580: INFO: Waiting for pod pod-configmaps-57f3ac81-e9c2-431e-a4f0-3d3e710a4f5a to disappear
Mar  6 03:29:47.581: INFO: Pod pod-configmaps-57f3ac81-e9c2-431e-a4f0-3d3e710a4f5a no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:29:47.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1662" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2231,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:29:47.587: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6745
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-f9c15a89-0eca-4b05-91e9-7258f59ba6b0
STEP: Creating a pod to test consume secrets
Mar  6 03:29:47.732: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b20ac681-6254-456c-a17c-635f976e127b" in namespace "projected-6745" to be "success or failure"
Mar  6 03:29:47.735: INFO: Pod "pod-projected-secrets-b20ac681-6254-456c-a17c-635f976e127b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275203ms
Mar  6 03:29:49.737: INFO: Pod "pod-projected-secrets-b20ac681-6254-456c-a17c-635f976e127b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004865905s
STEP: Saw pod success
Mar  6 03:29:49.737: INFO: Pod "pod-projected-secrets-b20ac681-6254-456c-a17c-635f976e127b" satisfied condition "success or failure"
Mar  6 03:29:49.739: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-b20ac681-6254-456c-a17c-635f976e127b container projected-secret-volume-test: 
STEP: delete the pod
Mar  6 03:29:49.753: INFO: Waiting for pod pod-projected-secrets-b20ac681-6254-456c-a17c-635f976e127b to disappear
Mar  6 03:29:49.755: INFO: Pod pod-projected-secrets-b20ac681-6254-456c-a17c-635f976e127b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:29:49.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6745" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2232,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
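The pod spec the framework generates for this projected-secret test is not included in the log. A minimal sketch of an equivalent manifest (all names, the image, and the secret key are illustrative, not taken from the log) might look like:

```yaml
# Hypothetical sketch of a pod consuming a Secret via a projected volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.31                      # assumed image; the e2e suite uses its own test images
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test        # hypothetical secret name
          items:
          - key: data-1                      # hypothetical key
            path: data-1
```

The test passes when the pod reaches `Succeeded`, i.e. the container read the mounted secret content and exited 0, matching the "success or failure" condition logged above.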
SSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:29:49.762: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-1269
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:29:49.898: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d8e0d3d6-6410-453f-81a4-cb1a5652a9b1" in namespace "security-context-test-1269" to be "success or failure"
Mar  6 03:29:49.900: INFO: Pod "alpine-nnp-false-d8e0d3d6-6410-453f-81a4-cb1a5652a9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.874977ms
Mar  6 03:29:51.902: INFO: Pod "alpine-nnp-false-d8e0d3d6-6410-453f-81a4-cb1a5652a9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004074736s
Mar  6 03:29:53.907: INFO: Pod "alpine-nnp-false-d8e0d3d6-6410-453f-81a4-cb1a5652a9b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008525692s
Mar  6 03:29:53.907: INFO: Pod "alpine-nnp-false-d8e0d3d6-6410-453f-81a4-cb1a5652a9b1" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:29:53.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1269" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2239,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
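The `alpine-nnp-false-*` pod created above exercises `allowPrivilegeEscalation: false`. The actual spec isn't echoed into the log; a hedged sketch of an equivalent pod (image tag and command are assumptions) could be:

```yaml
# Hypothetical sketch: a non-root container that must NOT be able to
# gain privileges (no_new_privs is set on the process).
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false-example
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine:3.11                       # assumed tag; the suite pins its own image
    command: ["sh", "-c", "id -u"]           # illustrative check run as the unprivileged user
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false
```

With this flag set, setuid binaries and similar mechanisms cannot raise the container process's privileges, which is what the `[LinuxOnly]` test verifies.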
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:29:53.924: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-1133
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-437e9b3a-5193-4ae2-9036-e811a6aaea1f in namespace container-probe-1133
Mar  6 03:29:56.063: INFO: Started pod liveness-437e9b3a-5193-4ae2-9036-e811a6aaea1f in namespace container-probe-1133
STEP: checking the pod's current state and verifying that restartCount is present
Mar  6 03:29:56.065: INFO: Initial restart count of pod liveness-437e9b3a-5193-4ae2-9036-e811a6aaea1f is 0
Mar  6 03:30:12.086: INFO: Restart count of pod container-probe-1133/liveness-437e9b3a-5193-4ae2-9036-e811a6aaea1f is now 1 (16.021099377s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:30:12.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1133" for this suite.

• [SLOW TEST:18.187 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2247,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
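The restart at 16s above is driven by an HTTP liveness probe against `/healthz`. The test's manifest isn't shown in the log; a sketch of the standard pattern (the image and timings are assumptions modeled on the upstream documentation, not read from this run) is:

```yaml
# Hypothetical sketch: a pod restarted by kubelet when /healthz starts failing.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness               # assumed image; serves /healthz then starts returning 500
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3                 # grace period before the first probe
      periodSeconds: 3                       # probe interval
      failureThreshold: 1                    # restart after a single failed probe
```

Once the probe fails, kubelet kills the container and the pod's `restartCount` increments, which is exactly the transition from 0 to 1 the test observes.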
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:30:12.110: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-7657
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:30:14.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7657" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2249,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
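This test creates a container with neither `command` nor `args`, so the image's own `ENTRYPOINT`/`CMD` apply. A hedged sketch (pod name is illustrative; the image is the one used elsewhere in this log):

```yaml
# Hypothetical sketch: no command/args, so the image defaults run.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    # command/args intentionally omitted: the image's ENTRYPOINT and CMD are used as-is
```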
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:30:14.263: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-7245
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Mar  6 03:30:14.397: INFO: Waiting up to 5m0s for pod "pod-7f9f5438-6af8-468b-90b0-584994c3f99c" in namespace "emptydir-7245" to be "success or failure"
Mar  6 03:30:14.399: INFO: Pod "pod-7f9f5438-6af8-468b-90b0-584994c3f99c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.884204ms
Mar  6 03:30:16.403: INFO: Pod "pod-7f9f5438-6af8-468b-90b0-584994c3f99c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006262916s
STEP: Saw pod success
Mar  6 03:30:16.403: INFO: Pod "pod-7f9f5438-6af8-468b-90b0-584994c3f99c" satisfied condition "success or failure"
Mar  6 03:30:16.405: INFO: Trying to get logs from node worker02 pod pod-7f9f5438-6af8-468b-90b0-584994c3f99c container test-container: 
STEP: delete the pod
Mar  6 03:30:16.421: INFO: Waiting for pod pod-7f9f5438-6af8-468b-90b0-584994c3f99c to disappear
Mar  6 03:30:16.423: INFO: Pod pod-7f9f5438-6af8-468b-90b0-584994c3f99c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:30:16.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7245" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2252,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
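The emptyDir test above checks the mount's default mode on the default (node-disk-backed) medium. Its manifest isn't logged; an equivalent sketch (image and check command are assumptions) could be:

```yaml
# Hypothetical sketch: emptyDir on the default medium; the test asserts
# the mount point carries the expected default permissions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31                      # assumed image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                             # default medium: backed by the node's filesystem
```

Specifying `emptyDir: {}` (no `medium: Memory`) is what selects the default medium the test name refers to.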
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:30:16.430: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-29
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Mar  6 03:30:16.558: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Mar  6 03:30:16.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-29'
Mar  6 03:30:16.757: INFO: stderr: ""
Mar  6 03:30:16.757: INFO: stdout: "service/agnhost-slave created\n"
Mar  6 03:30:16.757: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Mar  6 03:30:16.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-29'
Mar  6 03:30:16.985: INFO: stderr: ""
Mar  6 03:30:16.985: INFO: stdout: "service/agnhost-master created\n"
Mar  6 03:30:16.985: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar  6 03:30:16.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-29'
Mar  6 03:30:17.225: INFO: stderr: ""
Mar  6 03:30:17.225: INFO: stdout: "service/frontend created\n"
Mar  6 03:30:17.225: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Mar  6 03:30:17.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-29'
Mar  6 03:30:17.404: INFO: stderr: ""
Mar  6 03:30:17.404: INFO: stdout: "deployment.apps/frontend created\n"
Mar  6 03:30:17.404: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar  6 03:30:17.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-29'
Mar  6 03:30:17.564: INFO: stderr: ""
Mar  6 03:30:17.564: INFO: stdout: "deployment.apps/agnhost-master created\n"
Mar  6 03:30:17.564: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar  6 03:30:17.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-29'
Mar  6 03:30:17.772: INFO: stderr: ""
Mar  6 03:30:17.772: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar  6 03:30:17.772: INFO: Waiting for all frontend pods to be Running.
Mar  6 03:30:22.822: INFO: Waiting for frontend to serve content.
Mar  6 03:30:22.829: INFO: Trying to add a new entry to the guestbook.
Mar  6 03:30:22.835: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar  6 03:30:22.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-29'
Mar  6 03:30:22.940: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:30:22.940: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar  6 03:30:22.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-29'
Mar  6 03:30:23.044: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:30:23.044: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar  6 03:30:23.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-29'
Mar  6 03:30:23.154: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:30:23.154: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar  6 03:30:23.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-29'
Mar  6 03:30:23.237: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:30:23.237: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar  6 03:30:23.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-29'
Mar  6 03:30:23.302: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:30:23.302: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar  6 03:30:23.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-29'
Mar  6 03:30:23.367: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:30:23.367: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:30:23.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-29" for this suite.

• [SLOW TEST:6.944 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":127,"skipped":2311,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:30:23.375: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-6116
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-7d97b7d0-f5fd-421d-8933-9d130cb44cf8
STEP: Creating secret with name s-test-opt-upd-85c22add-f9e2-4e44-99bb-c081bea35ab9
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7d97b7d0-f5fd-421d-8933-9d130cb44cf8
STEP: Updating secret s-test-opt-upd-85c22add-f9e2-4e44-99bb-c081bea35ab9
STEP: Creating secret with name s-test-opt-create-99caee68-2703-43df-bd37-ec36b178ba57
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:31:59.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6116" for this suite.

• [SLOW TEST:96.533 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2334,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:31:59.908: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-watch-2634
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:32:00.040: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Creating first CR 
Mar  6 03:32:05.153: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-06T03:32:05Z generation:1 name:name1 resourceVersion:19145 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c26f12d8-1168-4a5d-bc4c-f2b8f3f862dd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Mar  6 03:32:15.157: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-06T03:32:15Z generation:1 name:name2 resourceVersion:19183 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6b4f9e92-7851-4373-87f3-6cdd386f3b16] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Mar  6 03:32:25.161: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-06T03:32:05Z generation:2 name:name1 resourceVersion:19218 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c26f12d8-1168-4a5d-bc4c-f2b8f3f862dd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Mar  6 03:32:35.166: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-06T03:32:15Z generation:2 name:name2 resourceVersion:19248 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6b4f9e92-7851-4373-87f3-6cdd386f3b16] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Mar  6 03:32:45.172: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-06T03:32:05Z generation:2 name:name1 resourceVersion:19278 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c26f12d8-1168-4a5d-bc4c-f2b8f3f862dd] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Mar  6 03:32:55.180: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-06T03:32:15Z generation:2 name:name2 resourceVersion:19311 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6b4f9e92-7851-4373-87f3-6cdd386f3b16] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:33:05.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2634" for this suite.

• [SLOW TEST:65.790 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":129,"skipped":2343,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:33:05.698: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-4134
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:33:05.833: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:33:07.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4134" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2377,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:33:07.962: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-4570
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar  6 03:33:08.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4570'
Mar  6 03:33:08.166: INFO: stderr: ""
Mar  6 03:33:08.166: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866
Mar  6 03:33:08.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete pods e2e-test-httpd-pod --namespace=kubectl-4570'
Mar  6 03:33:15.151: INFO: stderr: ""
Mar  6 03:33:15.151: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:33:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4570" for this suite.

• [SLOW TEST:7.196 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":131,"skipped":2384,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:33:15.158: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-8517
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar  6 03:33:15.295: INFO: Waiting up to 5m0s for pod "pod-42d1f99e-e14b-424b-8ed8-d80f4e90a3b2" in namespace "emptydir-8517" to be "success or failure"
Mar  6 03:33:15.297: INFO: Pod "pod-42d1f99e-e14b-424b-8ed8-d80f4e90a3b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224798ms
Mar  6 03:33:17.299: INFO: Pod "pod-42d1f99e-e14b-424b-8ed8-d80f4e90a3b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004859738s
STEP: Saw pod success
Mar  6 03:33:17.300: INFO: Pod "pod-42d1f99e-e14b-424b-8ed8-d80f4e90a3b2" satisfied condition "success or failure"
Mar  6 03:33:17.301: INFO: Trying to get logs from node worker02 pod pod-42d1f99e-e14b-424b-8ed8-d80f4e90a3b2 container test-container: 
STEP: delete the pod
Mar  6 03:33:17.328: INFO: Waiting for pod pod-42d1f99e-e14b-424b-8ed8-d80f4e90a3b2 to disappear
Mar  6 03:33:17.330: INFO: Pod pod-42d1f99e-e14b-424b-8ed8-d80f4e90a3b2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:33:17.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8517" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2384,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:33:17.337: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3729
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:33:18.275: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:33:21.306: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
Mar  6 03:33:31.329: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:33:41.438: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:33:51.539: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:34:01.641: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:34:11.652: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:34:11.652: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
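The repeated "Waiting for webhook configuration to be ready..." lines above come from a poll-until-timeout loop, and "timed out waiting for the condition" is the error such a loop surfaces when the condition never holds. A minimal sketch of the pattern (not the e2e framework's actual Go implementation; timeouts here are illustrative):

```python
import time

def wait_for(condition, timeout: float, interval: float) -> None:
    """Poll condition() every `interval` seconds until it returns True,
    raising TimeoutError once `timeout` seconds elapse without success."""
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError("timed out waiting for the condition")
        time.sleep(interval)

# A condition that is already true returns immediately.
wait_for(lambda: True, timeout=1.0, interval=0.1)

# A condition that never becomes true raises after the deadline.
try:
    wait_for(lambda: False, timeout=0.05, interval=0.01)
    timed_out = False
except TimeoutError:
    timed_out = True
```

In the failure above, the webhook deployment's pod was Running and Ready (see the events that follow), so the timeout points at the webhook configuration check itself rather than at pod scheduling.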
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-3729".
STEP: Found 6 events.
Mar  6 03:34:11.655: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d9tzp: {default-scheduler } Scheduled: Successfully assigned webhook-3729/sample-webhook-deployment-5f65f8c764-d9tzp to worker02
Mar  6 03:34:11.655: INFO: At 2020-03-06 03:33:18 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:34:11.656: INFO: At 2020-03-06 03:33:18 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-d9tzp
Mar  6 03:34:11.656: INFO: At 2020-03-06 03:33:18 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d9tzp: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:34:11.656: INFO: At 2020-03-06 03:33:18 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d9tzp: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:34:11.656: INFO: At 2020-03-06 03:33:19 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-d9tzp: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:34:11.659: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:34:11.659: INFO: sample-webhook-deployment-5f65f8c764-d9tzp  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:33:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:33:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:33:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:33:18 +0000 UTC  }]
Mar  6 03:34:11.659: INFO: 
Mar  6 03:34:11.661: INFO: 
Logging node info for node master01
Mar  6 03:34:11.663: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 19662 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:34:11.664: INFO: 
Logging kubelet events for node master01
Mar  6 03:34:11.667: INFO: 
Logging pods the kubelet thinks is on node master01
Mar  6 03:34:11.677: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:34:11.677: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:34:11.677: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:34:11.677: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.677: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:34:11.677: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.677: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:34:11.677: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.677: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:34:11.677: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.677: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:34:11.677: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:34:11.677: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:34:11.677: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:34:11.679844      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:34:11.694: INFO: 
Latency metrics for node master01
Mar  6 03:34:11.694: INFO: 
Logging node info for node master02
Mar  6 03:34:11.696: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 19646 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:34:11.696: INFO: 
Logging kubelet events for node master02
Mar  6 03:34:11.699: INFO: 
Logging pods the kubelet thinks is on node master02
Mar  6 03:34:11.714: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.714: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:34:11.714: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:34:11.714: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:34:11.714: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:34:11.714: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.714: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:34:11.714: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:34:11.714: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:34:11.714: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:34:11.714: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.714: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:34:11.714: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.714: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:34:11.714: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.714: INFO: 	Container kube-scheduler ready: true, restart count 1
W0306 03:34:11.717073      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:34:11.734: INFO: 
Latency metrics for node master02
Mar  6 03:34:11.734: INFO: 
Logging node info for node master03
Mar  6 03:34:11.736: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 19651 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:34:11.736: INFO: 
Logging kubelet events for node master03
Mar  6 03:34:11.740: INFO: 
Logging pods the kubelet thinks is on node master03
Mar  6 03:34:11.749: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.749: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:34:11.749: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.749: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:34:11.749: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.749: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:34:11.749: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.749: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:34:11.749: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.749: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:34:11.749: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:34:11.749: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:34:11.749: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:34:11.749: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.749: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:34:11.749: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:34:11.750: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:34:11.750: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:34:11.750: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.750: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:34:11.752411      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:34:11.769: INFO: 
Latency metrics for node master03
Mar  6 03:34:11.769: INFO: 
Logging node info for node worker01
Mar  6 03:34:11.771: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 18396 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:29:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:29:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:29:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:29:54 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:34:11.771: INFO: 
Logging kubelet events for node worker01
Mar  6 03:34:11.775: INFO: 
Logging pods the kubelet thinks is on node worker01
Mar  6 03:34:11.785: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:34:11.785: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:34:11.785: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:34:11.785: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:34:11.785: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:34:11.785: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:34:11.785: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:34:11.785: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:34:11.785: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:34:11.785: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:34:11.785: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:34:11.785: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:34:11.785: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:34:11.785: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.785: INFO: 	Container kuard ready: true, restart count 0
W0306 03:34:11.788128      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:34:11.806: INFO: 
Latency metrics for node worker01
Mar  6 03:34:11.806: INFO: 
Logging node info for node worker02
Mar  6 03:34:11.808: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 18724 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:30:23 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:30:23 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:30:23 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:30:23 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:34:11.808: INFO: 
Logging kubelet events for node worker02
Mar  6 03:34:11.812: INFO: 
Logging pods the kubelet thinks is on node worker02
Mar  6 03:34:11.819: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:34:11.819: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:34:11.819: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:34:11.819: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:34:11.819: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:34:11.819: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:34:11.819: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:34:11.819: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:34:11.819: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:34:11.819: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:34:11.819: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:34:11.819: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:34:11.819: INFO: sample-webhook-deployment-5f65f8c764-d9tzp started at 2020-03-06 03:33:18 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.819: INFO: 	Container sample-webhook ready: true, restart count 0
Mar  6 03:34:11.819: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.819: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:34:11.819: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:34:11.819: INFO: 	Container kube-sonobuoy ready: true, restart count 0
W0306 03:34:11.822144      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:34:11.839: INFO: 
Latency metrics for node worker02
Mar  6 03:34:11.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3729" for this suite.
STEP: Destroying namespace "webhook-3729-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [54.570 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:34:11.652: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1389
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":132,"skipped":2439,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:34:11.907: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-189
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:34:12.886: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:34:15.905: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:34:15.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-189" for this suite.
STEP: Destroying namespace "webhook-189-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":133,"skipped":2442,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:34:15.980: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7291
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-44f51e9b-b5d6-467a-8983-7f53792fb87b
STEP: Creating a pod to test consume secrets
Mar  6 03:34:16.138: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8a68eac-21f6-44ff-a3dd-195bcacf3cfe" in namespace "projected-7291" to be "success or failure"
Mar  6 03:34:16.140: INFO: Pod "pod-projected-secrets-d8a68eac-21f6-44ff-a3dd-195bcacf3cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164122ms
Mar  6 03:34:18.142: INFO: Pod "pod-projected-secrets-d8a68eac-21f6-44ff-a3dd-195bcacf3cfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004333224s
STEP: Saw pod success
Mar  6 03:34:18.142: INFO: Pod "pod-projected-secrets-d8a68eac-21f6-44ff-a3dd-195bcacf3cfe" satisfied condition "success or failure"
Mar  6 03:34:18.144: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-d8a68eac-21f6-44ff-a3dd-195bcacf3cfe container projected-secret-volume-test: 
STEP: delete the pod
Mar  6 03:34:18.158: INFO: Waiting for pod pod-projected-secrets-d8a68eac-21f6-44ff-a3dd-195bcacf3cfe to disappear
Mar  6 03:34:18.161: INFO: Pod pod-projected-secrets-d8a68eac-21f6-44ff-a3dd-195bcacf3cfe no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:34:18.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7291" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2445,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:34:18.168: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-webhook-2895
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:34:18.600: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Mar  6 03:34:20.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062458, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062458, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062458, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062458, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:34:23.627: INFO: Waiting for the number of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:34:23.631: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:34:58.737: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-592-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-2895.svc:9443/crdconvert?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar  6 03:35:28.841: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-592-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-2895.svc:9443/crdconvert?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar  6 03:35:58.846: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-592-crd failed: Post https://e2e-test-crd-conversion-webhook.crd-webhook-2895.svc:9443/crdconvert?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Mar  6 03:35:58.846: FAIL: Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "crd-webhook-2895".
STEP: Found 6 events.
Mar  6 03:35:59.359: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-fwvw5: {default-scheduler } Scheduled: Successfully assigned crd-webhook-2895/sample-crd-conversion-webhook-deployment-78dcf5dd84-fwvw5 to worker02
Mar  6 03:35:59.359: INFO: At 2020-03-06 03:34:18 +0000 UTC - event for sample-crd-conversion-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-crd-conversion-webhook-deployment-78dcf5dd84 to 1
Mar  6 03:35:59.359: INFO: At 2020-03-06 03:34:18 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84: {replicaset-controller } SuccessfulCreate: Created pod: sample-crd-conversion-webhook-deployment-78dcf5dd84-fwvw5
Mar  6 03:35:59.359: INFO: At 2020-03-06 03:34:19 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-fwvw5: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:35:59.359: INFO: At 2020-03-06 03:34:19 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-fwvw5: {kubelet worker02} Created: Created container sample-crd-conversion-webhook
Mar  6 03:35:59.359: INFO: At 2020-03-06 03:34:19 +0000 UTC - event for sample-crd-conversion-webhook-deployment-78dcf5dd84-fwvw5: {kubelet worker02} Started: Started container sample-crd-conversion-webhook
Mar  6 03:35:59.361: INFO: POD                                                        NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:35:59.361: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-fwvw5  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:34:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:34:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:34:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:34:18 +0000 UTC  }]
Mar  6 03:35:59.361: INFO: 
Mar  6 03:35:59.364: INFO: 
Logging node info for node master01
Mar  6 03:35:59.366: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 19662 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:35:59.366: INFO: 
Logging kubelet events for node master01
Mar  6 03:35:59.370: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:35:59.380: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.380: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:35:59.380: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.380: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:35:59.380: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:35:59.380: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:35:59.380: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:35:59.380: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:35:59.380: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:35:59.380: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:35:59.380: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.380: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:35:59.380: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.380: INFO: 	Container kube-apiserver ready: true, restart count 0
W0306 03:35:59.388353      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:35:59.403: INFO: 
Latency metrics for node master01
Mar  6 03:35:59.403: INFO: 
Logging node info for node master02
Mar  6 03:35:59.406: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 19646 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:35:59.407: INFO: 
Logging kubelet events for node master02
Mar  6 03:35:59.411: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:35:59.426: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.426: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:35:59.426: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.426: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:35:59.426: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.426: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:35:59.426: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.426: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:35:59.426: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:35:59.426: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:35:59.426: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:35:59.426: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.426: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:35:59.426: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:35:59.426: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:35:59.426: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:35:59.428750      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:35:59.443: INFO: 
Latency metrics for node master02
Mar  6 03:35:59.443: INFO: 
Logging node info for node master03
Mar  6 03:35:59.445: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 19651 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:35:59.445: INFO: 
Logging kubelet events for node master03
Mar  6 03:35:59.449: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:35:59.460: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:35:59.460: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:35:59.460: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:35:59.460: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:35:59.460: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:35:59.460: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:35:59.460: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:35:59.460: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:35:59.460: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:35:59.460: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:35:59.460: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.460: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:35:59.463586      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:35:59.480: INFO: 
Latency metrics for node master03
Mar  6 03:35:59.480: INFO: 
Logging node info for node worker01
Mar  6 03:35:59.482: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 19979 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:34:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:35:59.482: INFO: 
Logging kubelet events for node worker01
Mar  6 03:35:59.486: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:35:59.497: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:35:59.497: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:35:59.497: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:35:59.497: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:35:59.497: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:35:59.497: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:35:59.497: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:35:59.497: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:35:59.497: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:35:59.497: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:35:59.497: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:35:59.497: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:35:59.497: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:35:59.497: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.497: INFO: 	Container metrics-server ready: true, restart count 0
W0306 03:35:59.500332      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:35:59.524: INFO: 
Latency metrics for node worker01
Mar  6 03:35:59.524: INFO: 
Logging node info for node worker02
Mar  6 03:35:59.526: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 20070 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:35:24 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:35:24 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:35:24 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:35:24 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:35:59.526: INFO: 
Logging kubelet events for node worker02
Mar  6 03:35:59.530: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:35:59.544: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:35:59.544: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:35:59.544: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:35:59.544: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:35:59.544: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:35:59.544: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:35:59.544: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:35:59.544: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:35:59.544: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:35:59.544: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:35:59.544: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:35:59.544: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:35:59.544: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.544: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:35:59.544: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.544: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:35:59.544: INFO: sample-crd-conversion-webhook-deployment-78dcf5dd84-fwvw5 started at 2020-03-06 03:34:18 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:35:59.544: INFO: 	Container sample-crd-conversion-webhook ready: true, restart count 0
W0306 03:35:59.546875      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:35:59.570: INFO: 
Latency metrics for node worker02
Mar  6 03:35:59.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2895" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• Failure [101.509 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:35:58.846: Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:493
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":134,"skipped":2471,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:35:59.677: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-1450
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar  6 03:35:59.862: INFO: Waiting up to 5m0s for pod "pod-dbe69573-ee7d-4d43-8f83-1f88c5cddbc6" in namespace "emptydir-1450" to be "success or failure"
Mar  6 03:35:59.878: INFO: Pod "pod-dbe69573-ee7d-4d43-8f83-1f88c5cddbc6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.466495ms
Mar  6 03:36:01.880: INFO: Pod "pod-dbe69573-ee7d-4d43-8f83-1f88c5cddbc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017954059s
STEP: Saw pod success
Mar  6 03:36:01.880: INFO: Pod "pod-dbe69573-ee7d-4d43-8f83-1f88c5cddbc6" satisfied condition "success or failure"
Mar  6 03:36:01.882: INFO: Trying to get logs from node worker02 pod pod-dbe69573-ee7d-4d43-8f83-1f88c5cddbc6 container test-container: 
STEP: delete the pod
Mar  6 03:36:01.896: INFO: Waiting for pod pod-dbe69573-ee7d-4d43-8f83-1f88c5cddbc6 to disappear
Mar  6 03:36:01.898: INFO: Pod pod-dbe69573-ee7d-4d43-8f83-1f88c5cddbc6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:36:01.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1450" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2488,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:36:01.906: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename svc-latency
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svc-latency-9796
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:36:02.036: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9796
I0306 03:36:02.044497      19 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9796, replica count: 1
I0306 03:36:03.094728      19 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0306 03:36:04.094861      19 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar  6 03:36:04.213: INFO: Created: latency-svc-rw4qn
Mar  6 03:36:04.221: INFO: Got endpoints: latency-svc-rw4qn [26.355266ms]
Mar  6 03:36:04.240: INFO: Created: latency-svc-c4tt6
Mar  6 03:36:04.253: INFO: Got endpoints: latency-svc-c4tt6 [32.292566ms]
Mar  6 03:36:04.256: INFO: Created: latency-svc-jtzfq
Mar  6 03:36:04.268: INFO: Got endpoints: latency-svc-jtzfq [47.171504ms]
Mar  6 03:36:04.276: INFO: Created: latency-svc-jltt5
Mar  6 03:36:04.283: INFO: Got endpoints: latency-svc-jltt5 [61.285242ms]
Mar  6 03:36:04.295: INFO: Created: latency-svc-xbg5c
Mar  6 03:36:04.307: INFO: Got endpoints: latency-svc-xbg5c [84.847375ms]
Mar  6 03:36:04.315: INFO: Created: latency-svc-6mcqk
Mar  6 03:36:04.326: INFO: Got endpoints: latency-svc-6mcqk [103.974153ms]
Mar  6 03:36:04.333: INFO: Created: latency-svc-dnxdf
Mar  6 03:36:04.369: INFO: Got endpoints: latency-svc-dnxdf [147.445786ms]
Mar  6 03:36:04.426: INFO: Created: latency-svc-bn62t
Mar  6 03:36:04.471: INFO: Created: latency-svc-lwrwq
Mar  6 03:36:04.498: INFO: Got endpoints: latency-svc-bn62t [276.411336ms]
Mar  6 03:36:04.498: INFO: Got endpoints: latency-svc-lwrwq [275.704828ms]
Mar  6 03:36:04.507: INFO: Created: latency-svc-7xzkz
Mar  6 03:36:04.512: INFO: Got endpoints: latency-svc-7xzkz [289.3897ms]
Mar  6 03:36:04.524: INFO: Created: latency-svc-mbzfd
Mar  6 03:36:04.527: INFO: Got endpoints: latency-svc-mbzfd [305.525192ms]
Mar  6 03:36:04.563: INFO: Created: latency-svc-5lx9x
Mar  6 03:36:04.569: INFO: Got endpoints: latency-svc-5lx9x [347.045441ms]
Mar  6 03:36:04.579: INFO: Created: latency-svc-xsfts
Mar  6 03:36:04.584: INFO: Got endpoints: latency-svc-xsfts [361.807646ms]
Mar  6 03:36:04.621: INFO: Created: latency-svc-rlx99
Mar  6 03:36:04.625: INFO: Created: latency-svc-pksjm
Mar  6 03:36:04.625: INFO: Got endpoints: latency-svc-pksjm [403.151989ms]
Mar  6 03:36:04.632: INFO: Got endpoints: latency-svc-rlx99 [410.125399ms]
Mar  6 03:36:04.649: INFO: Created: latency-svc-z8lsr
Mar  6 03:36:04.661: INFO: Got endpoints: latency-svc-z8lsr [438.955641ms]
Mar  6 03:36:04.666: INFO: Created: latency-svc-j5thz
Mar  6 03:36:04.674: INFO: Got endpoints: latency-svc-j5thz [420.538102ms]
Mar  6 03:36:04.676: INFO: Created: latency-svc-zdl28
Mar  6 03:36:04.689: INFO: Got endpoints: latency-svc-zdl28 [421.089883ms]
Mar  6 03:36:04.710: INFO: Created: latency-svc-kbm5j
Mar  6 03:36:04.728: INFO: Got endpoints: latency-svc-kbm5j [445.684276ms]
Mar  6 03:36:04.732: INFO: Created: latency-svc-cpnb7
Mar  6 03:36:04.744: INFO: Got endpoints: latency-svc-cpnb7 [437.182687ms]
Mar  6 03:36:04.756: INFO: Created: latency-svc-8zk87
Mar  6 03:36:04.771: INFO: Created: latency-svc-2xc9t
Mar  6 03:36:04.775: INFO: Got endpoints: latency-svc-8zk87 [448.874195ms]
Mar  6 03:36:04.785: INFO: Got endpoints: latency-svc-2xc9t [416.047779ms]
Mar  6 03:36:04.793: INFO: Created: latency-svc-2b294
Mar  6 03:36:04.806: INFO: Got endpoints: latency-svc-2b294 [307.780342ms]
Mar  6 03:36:04.815: INFO: Created: latency-svc-g8bbk
Mar  6 03:36:04.818: INFO: Got endpoints: latency-svc-g8bbk [319.981706ms]
Mar  6 03:36:04.831: INFO: Created: latency-svc-ln98g
Mar  6 03:36:04.840: INFO: Got endpoints: latency-svc-ln98g [328.0715ms]
Mar  6 03:36:04.847: INFO: Created: latency-svc-4g8jr
Mar  6 03:36:04.858: INFO: Got endpoints: latency-svc-4g8jr [330.693826ms]
Mar  6 03:36:04.872: INFO: Created: latency-svc-94nzg
Mar  6 03:36:04.892: INFO: Got endpoints: latency-svc-94nzg [322.821065ms]
Mar  6 03:36:04.903: INFO: Created: latency-svc-bfk4g
Mar  6 03:36:04.920: INFO: Got endpoints: latency-svc-bfk4g [336.132789ms]
Mar  6 03:36:04.962: INFO: Created: latency-svc-bcl72
Mar  6 03:36:04.965: INFO: Got endpoints: latency-svc-bcl72 [339.54353ms]
Mar  6 03:36:04.980: INFO: Created: latency-svc-7xbxf
Mar  6 03:36:04.987: INFO: Got endpoints: latency-svc-7xbxf [354.72609ms]
Mar  6 03:36:04.991: INFO: Created: latency-svc-kr97l
Mar  6 03:36:05.002: INFO: Got endpoints: latency-svc-kr97l [340.632722ms]
Mar  6 03:36:05.011: INFO: Created: latency-svc-q76ng
Mar  6 03:36:05.027: INFO: Got endpoints: latency-svc-q76ng [353.047077ms]
Mar  6 03:36:05.041: INFO: Created: latency-svc-h89gt
Mar  6 03:36:05.055: INFO: Got endpoints: latency-svc-h89gt [365.864373ms]
Mar  6 03:36:05.071: INFO: Created: latency-svc-q5df6
Mar  6 03:36:05.078: INFO: Got endpoints: latency-svc-q5df6 [349.966228ms]
Mar  6 03:36:05.091: INFO: Created: latency-svc-d52m7
Mar  6 03:36:05.110: INFO: Got endpoints: latency-svc-d52m7 [366.290366ms]
Mar  6 03:36:05.121: INFO: Created: latency-svc-srn79
Mar  6 03:36:05.137: INFO: Got endpoints: latency-svc-srn79 [362.07655ms]
Mar  6 03:36:05.137: INFO: Created: latency-svc-kclcn
Mar  6 03:36:05.146: INFO: Got endpoints: latency-svc-kclcn [360.529667ms]
Mar  6 03:36:05.161: INFO: Created: latency-svc-q6lcc
Mar  6 03:36:05.181: INFO: Got endpoints: latency-svc-q6lcc [375.451712ms]
Mar  6 03:36:05.187: INFO: Created: latency-svc-qmf8s
Mar  6 03:36:05.191: INFO: Got endpoints: latency-svc-qmf8s [373.117377ms]
Mar  6 03:36:05.199: INFO: Created: latency-svc-f62g6
Mar  6 03:36:05.209: INFO: Got endpoints: latency-svc-f62g6 [369.228993ms]
Mar  6 03:36:05.225: INFO: Created: latency-svc-wck8p
Mar  6 03:36:05.232: INFO: Got endpoints: latency-svc-wck8p [373.881353ms]
Mar  6 03:36:05.247: INFO: Created: latency-svc-6sggj
Mar  6 03:36:05.259: INFO: Got endpoints: latency-svc-6sggj [366.836742ms]
Mar  6 03:36:05.267: INFO: Created: latency-svc-f6gzd
Mar  6 03:36:05.272: INFO: Got endpoints: latency-svc-f6gzd [351.988801ms]
Mar  6 03:36:05.287: INFO: Created: latency-svc-kmp6h
Mar  6 03:36:05.290: INFO: Got endpoints: latency-svc-kmp6h [324.724462ms]
Mar  6 03:36:05.300: INFO: Created: latency-svc-7mt74
Mar  6 03:36:05.311: INFO: Got endpoints: latency-svc-7mt74 [324.453084ms]
Mar  6 03:36:05.322: INFO: Created: latency-svc-zfqkx
Mar  6 03:36:05.330: INFO: Got endpoints: latency-svc-zfqkx [328.533998ms]
Mar  6 03:36:05.332: INFO: Created: latency-svc-bhfg9
Mar  6 03:36:05.339: INFO: Got endpoints: latency-svc-bhfg9 [312.512697ms]
Mar  6 03:36:05.353: INFO: Created: latency-svc-r9nwz
Mar  6 03:36:05.365: INFO: Got endpoints: latency-svc-r9nwz [309.986687ms]
Mar  6 03:36:05.375: INFO: Created: latency-svc-vhmpr
Mar  6 03:36:05.379: INFO: Got endpoints: latency-svc-vhmpr [300.269695ms]
Mar  6 03:36:05.390: INFO: Created: latency-svc-frrmp
Mar  6 03:36:05.406: INFO: Got endpoints: latency-svc-frrmp [296.385729ms]
Mar  6 03:36:05.420: INFO: Created: latency-svc-xddn6
Mar  6 03:36:05.432: INFO: Got endpoints: latency-svc-xddn6 [294.882099ms]
Mar  6 03:36:05.436: INFO: Created: latency-svc-gbn2p
Mar  6 03:36:05.442: INFO: Got endpoints: latency-svc-gbn2p [296.498319ms]
Mar  6 03:36:05.456: INFO: Created: latency-svc-rr5nv
Mar  6 03:36:05.471: INFO: Got endpoints: latency-svc-rr5nv [289.828276ms]
Mar  6 03:36:05.471: INFO: Created: latency-svc-pbwkz
Mar  6 03:36:05.483: INFO: Got endpoints: latency-svc-pbwkz [291.617677ms]
Mar  6 03:36:05.486: INFO: Created: latency-svc-x26z2
Mar  6 03:36:05.498: INFO: Got endpoints: latency-svc-x26z2 [288.835228ms]
Mar  6 03:36:05.501: INFO: Created: latency-svc-6jxn9
Mar  6 03:36:05.530: INFO: Got endpoints: latency-svc-6jxn9 [298.41003ms]
Mar  6 03:36:05.537: INFO: Created: latency-svc-zhwmz
Mar  6 03:36:05.557: INFO: Created: latency-svc-b8jh9
Mar  6 03:36:05.570: INFO: Got endpoints: latency-svc-zhwmz [311.4766ms]
Mar  6 03:36:05.571: INFO: Created: latency-svc-dglp8
Mar  6 03:36:05.582: INFO: Created: latency-svc-srtd5
Mar  6 03:36:05.593: INFO: Created: latency-svc-5x5tj
Mar  6 03:36:05.624: INFO: Got endpoints: latency-svc-b8jh9 [351.546248ms]
Mar  6 03:36:05.631: INFO: Created: latency-svc-mj7mg
Mar  6 03:36:05.645: INFO: Created: latency-svc-znsk8
Mar  6 03:36:05.665: INFO: Created: latency-svc-t9wtq
Mar  6 03:36:05.672: INFO: Got endpoints: latency-svc-dglp8 [382.379754ms]
Mar  6 03:36:05.680: INFO: Created: latency-svc-xbp9l
Mar  6 03:36:05.688: INFO: Created: latency-svc-l6p6k
Mar  6 03:36:05.700: INFO: Created: latency-svc-xn4rp
Mar  6 03:36:05.719: INFO: Created: latency-svc-729wz
Mar  6 03:36:05.732: INFO: Got endpoints: latency-svc-srtd5 [420.704301ms]
Mar  6 03:36:05.746: INFO: Created: latency-svc-nlms8
Mar  6 03:36:05.770: INFO: Got endpoints: latency-svc-5x5tj [440.032455ms]
Mar  6 03:36:05.784: INFO: Created: latency-svc-k5q89
Mar  6 03:36:05.796: INFO: Created: latency-svc-j5lc4
Mar  6 03:36:05.814: INFO: Created: latency-svc-6zx9h
Mar  6 03:36:05.826: INFO: Got endpoints: latency-svc-mj7mg [486.191446ms]
Mar  6 03:36:05.848: INFO: Created: latency-svc-mvjvs
Mar  6 03:36:05.855: INFO: Created: latency-svc-kll8c
Mar  6 03:36:05.879: INFO: Got endpoints: latency-svc-znsk8 [513.559537ms]
Mar  6 03:36:05.879: INFO: Created: latency-svc-mwbjb
Mar  6 03:36:05.891: INFO: Created: latency-svc-trtj5
Mar  6 03:36:05.912: INFO: Created: latency-svc-d7kmr
Mar  6 03:36:05.923: INFO: Got endpoints: latency-svc-t9wtq [544.553326ms]
Mar  6 03:36:05.932: INFO: Created: latency-svc-n9pgp
Mar  6 03:36:05.951: INFO: Created: latency-svc-wfrl8
Mar  6 03:36:05.968: INFO: Got endpoints: latency-svc-xbp9l [561.462801ms]
Mar  6 03:36:05.989: INFO: Created: latency-svc-vftnw
Mar  6 03:36:06.021: INFO: Got endpoints: latency-svc-l6p6k [588.99433ms]
Mar  6 03:36:06.041: INFO: Created: latency-svc-l8vcl
Mar  6 03:36:06.069: INFO: Got endpoints: latency-svc-xn4rp [626.976619ms]
Mar  6 03:36:06.088: INFO: Created: latency-svc-tfzdx
Mar  6 03:36:06.121: INFO: Got endpoints: latency-svc-729wz [649.755332ms]
Mar  6 03:36:06.137: INFO: Created: latency-svc-9dhn7
Mar  6 03:36:06.181: INFO: Got endpoints: latency-svc-nlms8 [697.685992ms]
Mar  6 03:36:06.200: INFO: Created: latency-svc-vfg6c
Mar  6 03:36:06.222: INFO: Got endpoints: latency-svc-k5q89 [724.115786ms]
Mar  6 03:36:06.239: INFO: Created: latency-svc-8667h
Mar  6 03:36:06.269: INFO: Got endpoints: latency-svc-j5lc4 [738.229949ms]
Mar  6 03:36:06.284: INFO: Created: latency-svc-wrxlt
Mar  6 03:36:06.324: INFO: Got endpoints: latency-svc-6zx9h [753.50366ms]
Mar  6 03:36:06.345: INFO: Created: latency-svc-kk7ml
Mar  6 03:36:06.368: INFO: Got endpoints: latency-svc-mvjvs [743.961534ms]
Mar  6 03:36:06.391: INFO: Created: latency-svc-dzp5b
Mar  6 03:36:06.421: INFO: Got endpoints: latency-svc-kll8c [748.930082ms]
Mar  6 03:36:06.442: INFO: Created: latency-svc-rbzs2
Mar  6 03:36:06.469: INFO: Got endpoints: latency-svc-mwbjb [736.477425ms]
Mar  6 03:36:06.485: INFO: Created: latency-svc-jg2xd
Mar  6 03:36:06.519: INFO: Got endpoints: latency-svc-trtj5 [748.78565ms]
Mar  6 03:36:06.535: INFO: Created: latency-svc-j8np9
Mar  6 03:36:06.570: INFO: Got endpoints: latency-svc-d7kmr [744.315354ms]
Mar  6 03:36:06.589: INFO: Created: latency-svc-8c57p
Mar  6 03:36:06.623: INFO: Got endpoints: latency-svc-n9pgp [743.984103ms]
Mar  6 03:36:06.637: INFO: Created: latency-svc-pvbdg
Mar  6 03:36:06.670: INFO: Got endpoints: latency-svc-wfrl8 [746.657347ms]
Mar  6 03:36:06.684: INFO: Created: latency-svc-8llpz
Mar  6 03:36:06.721: INFO: Got endpoints: latency-svc-vftnw [753.425009ms]
Mar  6 03:36:06.741: INFO: Created: latency-svc-6p6pm
Mar  6 03:36:06.768: INFO: Got endpoints: latency-svc-l8vcl [746.948749ms]
Mar  6 03:36:06.782: INFO: Created: latency-svc-9wcxl
Mar  6 03:36:06.824: INFO: Got endpoints: latency-svc-tfzdx [754.793176ms]
Mar  6 03:36:06.845: INFO: Created: latency-svc-v47v9
Mar  6 03:36:06.869: INFO: Got endpoints: latency-svc-9dhn7 [748.143847ms]
Mar  6 03:36:06.888: INFO: Created: latency-svc-kcsrv
Mar  6 03:36:06.922: INFO: Got endpoints: latency-svc-vfg6c [741.419551ms]
Mar  6 03:36:06.938: INFO: Created: latency-svc-c5889
Mar  6 03:36:06.969: INFO: Got endpoints: latency-svc-8667h [746.861876ms]
Mar  6 03:36:07.054: INFO: Got endpoints: latency-svc-wrxlt [785.718885ms]
Mar  6 03:36:07.056: INFO: Created: latency-svc-jnfs4
Mar  6 03:36:07.078: INFO: Got endpoints: latency-svc-kk7ml [753.963352ms]
Mar  6 03:36:07.121: INFO: Created: latency-svc-xzcxx
Mar  6 03:36:07.126: INFO: Got endpoints: latency-svc-dzp5b [758.533893ms]
Mar  6 03:36:07.149: INFO: Created: latency-svc-kn479
Mar  6 03:36:07.166: INFO: Created: latency-svc-vzzkr
Mar  6 03:36:07.174: INFO: Got endpoints: latency-svc-rbzs2 [752.65711ms]
Mar  6 03:36:07.188: INFO: Created: latency-svc-xt6gh
Mar  6 03:36:07.221: INFO: Got endpoints: latency-svc-jg2xd [751.977102ms]
Mar  6 03:36:07.237: INFO: Created: latency-svc-fpdzv
Mar  6 03:36:07.270: INFO: Got endpoints: latency-svc-j8np9 [750.584859ms]
Mar  6 03:36:07.283: INFO: Created: latency-svc-9frq6
Mar  6 03:36:07.325: INFO: Got endpoints: latency-svc-8c57p [754.654457ms]
Mar  6 03:36:07.349: INFO: Created: latency-svc-tz5zz
Mar  6 03:36:07.373: INFO: Got endpoints: latency-svc-pvbdg [750.074583ms]
Mar  6 03:36:07.393: INFO: Created: latency-svc-vfgb2
Mar  6 03:36:07.418: INFO: Got endpoints: latency-svc-8llpz [748.582529ms]
Mar  6 03:36:07.435: INFO: Created: latency-svc-bhp4z
Mar  6 03:36:07.471: INFO: Got endpoints: latency-svc-6p6pm [749.49797ms]
Mar  6 03:36:07.487: INFO: Created: latency-svc-9jqn9
Mar  6 03:36:07.522: INFO: Got endpoints: latency-svc-9wcxl [754.81753ms]
Mar  6 03:36:07.538: INFO: Created: latency-svc-j65jt
Mar  6 03:36:07.569: INFO: Got endpoints: latency-svc-v47v9 [744.784388ms]
Mar  6 03:36:07.584: INFO: Created: latency-svc-n5w9f
Mar  6 03:36:07.622: INFO: Got endpoints: latency-svc-kcsrv [752.884475ms]
Mar  6 03:36:07.642: INFO: Created: latency-svc-dhpjn
Mar  6 03:36:07.668: INFO: Got endpoints: latency-svc-c5889 [746.0179ms]
Mar  6 03:36:07.682: INFO: Created: latency-svc-r2845
Mar  6 03:36:07.720: INFO: Got endpoints: latency-svc-jnfs4 [751.458411ms]
Mar  6 03:36:07.738: INFO: Created: latency-svc-w5hds
Mar  6 03:36:07.768: INFO: Got endpoints: latency-svc-xzcxx [713.852898ms]
Mar  6 03:36:07.785: INFO: Created: latency-svc-khpz9
Mar  6 03:36:07.823: INFO: Got endpoints: latency-svc-kn479 [744.897778ms]
Mar  6 03:36:07.843: INFO: Created: latency-svc-rcs8d
Mar  6 03:36:07.867: INFO: Got endpoints: latency-svc-vzzkr [740.867244ms]
Mar  6 03:36:07.892: INFO: Created: latency-svc-wd82n
Mar  6 03:36:07.923: INFO: Got endpoints: latency-svc-xt6gh [749.728427ms]
Mar  6 03:36:07.938: INFO: Created: latency-svc-9r7vg
Mar  6 03:36:07.968: INFO: Got endpoints: latency-svc-fpdzv [747.106778ms]
Mar  6 03:36:07.983: INFO: Created: latency-svc-hs7k9
Mar  6 03:36:08.024: INFO: Got endpoints: latency-svc-9frq6 [754.442571ms]
Mar  6 03:36:08.040: INFO: Created: latency-svc-5vnhl
Mar  6 03:36:08.069: INFO: Got endpoints: latency-svc-tz5zz [744.713946ms]
Mar  6 03:36:08.087: INFO: Created: latency-svc-bz758
Mar  6 03:36:08.121: INFO: Got endpoints: latency-svc-vfgb2 [747.913912ms]
Mar  6 03:36:08.140: INFO: Created: latency-svc-n2fz8
Mar  6 03:36:08.168: INFO: Got endpoints: latency-svc-bhp4z [749.561993ms]
Mar  6 03:36:08.183: INFO: Created: latency-svc-6c2ws
Mar  6 03:36:08.224: INFO: Got endpoints: latency-svc-9jqn9 [753.271798ms]
Mar  6 03:36:08.243: INFO: Created: latency-svc-t27jv
Mar  6 03:36:08.268: INFO: Got endpoints: latency-svc-j65jt [745.870941ms]
Mar  6 03:36:08.282: INFO: Created: latency-svc-qn7sr
Mar  6 03:36:08.321: INFO: Got endpoints: latency-svc-n5w9f [752.235352ms]
Mar  6 03:36:08.337: INFO: Created: latency-svc-v9xj5
Mar  6 03:36:08.372: INFO: Got endpoints: latency-svc-dhpjn [750.269835ms]
Mar  6 03:36:08.387: INFO: Created: latency-svc-hrcck
Mar  6 03:36:08.420: INFO: Got endpoints: latency-svc-r2845 [752.032496ms]
Mar  6 03:36:08.438: INFO: Created: latency-svc-mf7b9
Mar  6 03:36:08.468: INFO: Got endpoints: latency-svc-w5hds [747.294892ms]
Mar  6 03:36:08.499: INFO: Created: latency-svc-789w7
Mar  6 03:36:08.521: INFO: Got endpoints: latency-svc-khpz9 [752.228973ms]
Mar  6 03:36:08.562: INFO: Created: latency-svc-n68ng
Mar  6 03:36:08.568: INFO: Got endpoints: latency-svc-rcs8d [745.117803ms]
Mar  6 03:36:08.604: INFO: Created: latency-svc-thtq2
Mar  6 03:36:08.624: INFO: Got endpoints: latency-svc-wd82n [756.549881ms]
Mar  6 03:36:08.639: INFO: Created: latency-svc-k9zj5
Mar  6 03:36:08.668: INFO: Got endpoints: latency-svc-9r7vg [744.47375ms]
Mar  6 03:36:08.687: INFO: Created: latency-svc-7r86l
Mar  6 03:36:08.721: INFO: Got endpoints: latency-svc-hs7k9 [752.825219ms]
Mar  6 03:36:08.735: INFO: Created: latency-svc-ls6rl
Mar  6 03:36:08.773: INFO: Got endpoints: latency-svc-5vnhl [748.627999ms]
Mar  6 03:36:08.786: INFO: Created: latency-svc-6fhcx
Mar  6 03:36:08.822: INFO: Got endpoints: latency-svc-bz758 [752.183077ms]
Mar  6 03:36:08.846: INFO: Created: latency-svc-82k4b
Mar  6 03:36:08.870: INFO: Got endpoints: latency-svc-n2fz8 [749.020697ms]
Mar  6 03:36:08.893: INFO: Created: latency-svc-dstsm
Mar  6 03:36:08.921: INFO: Got endpoints: latency-svc-6c2ws [753.095396ms]
Mar  6 03:36:08.944: INFO: Created: latency-svc-kzd8z
Mar  6 03:36:08.968: INFO: Got endpoints: latency-svc-t27jv [743.575115ms]
Mar  6 03:36:08.984: INFO: Created: latency-svc-lm4nl
Mar  6 03:36:09.021: INFO: Got endpoints: latency-svc-qn7sr [752.30664ms]
Mar  6 03:36:09.039: INFO: Created: latency-svc-r7xkd
Mar  6 03:36:09.071: INFO: Got endpoints: latency-svc-v9xj5 [749.609486ms]
Mar  6 03:36:09.094: INFO: Created: latency-svc-kjbhx
Mar  6 03:36:09.122: INFO: Got endpoints: latency-svc-hrcck [749.378236ms]
Mar  6 03:36:09.145: INFO: Created: latency-svc-qdbtd
Mar  6 03:36:09.168: INFO: Got endpoints: latency-svc-mf7b9 [747.860448ms]
Mar  6 03:36:09.182: INFO: Created: latency-svc-fdgpg
Mar  6 03:36:09.220: INFO: Got endpoints: latency-svc-789w7 [752.080547ms]
Mar  6 03:36:09.234: INFO: Created: latency-svc-kw5j5
Mar  6 03:36:09.268: INFO: Got endpoints: latency-svc-n68ng [747.166234ms]
Mar  6 03:36:09.286: INFO: Created: latency-svc-cg4w9
Mar  6 03:36:09.319: INFO: Got endpoints: latency-svc-thtq2 [751.58916ms]
Mar  6 03:36:09.338: INFO: Created: latency-svc-cdzhg
Mar  6 03:36:09.372: INFO: Got endpoints: latency-svc-k9zj5 [747.999996ms]
Mar  6 03:36:09.392: INFO: Created: latency-svc-brjd9
Mar  6 03:36:09.430: INFO: Got endpoints: latency-svc-7r86l [761.785927ms]
Mar  6 03:36:09.446: INFO: Created: latency-svc-6kxwh
Mar  6 03:36:09.468: INFO: Got endpoints: latency-svc-ls6rl [747.744268ms]
Mar  6 03:36:09.484: INFO: Created: latency-svc-7t4gq
Mar  6 03:36:09.521: INFO: Got endpoints: latency-svc-6fhcx [747.812258ms]
Mar  6 03:36:09.533: INFO: Created: latency-svc-cmfcj
Mar  6 03:36:09.572: INFO: Got endpoints: latency-svc-82k4b [750.042899ms]
Mar  6 03:36:09.590: INFO: Created: latency-svc-wsczb
Mar  6 03:36:09.622: INFO: Got endpoints: latency-svc-dstsm [752.264121ms]
Mar  6 03:36:09.639: INFO: Created: latency-svc-7xw27
Mar  6 03:36:09.669: INFO: Got endpoints: latency-svc-kzd8z [747.855149ms]
Mar  6 03:36:09.682: INFO: Created: latency-svc-gxrt2
Mar  6 03:36:09.724: INFO: Got endpoints: latency-svc-lm4nl [756.044048ms]
Mar  6 03:36:09.741: INFO: Created: latency-svc-2phk8
Mar  6 03:36:09.770: INFO: Got endpoints: latency-svc-r7xkd [749.427485ms]
Mar  6 03:36:09.784: INFO: Created: latency-svc-lb2xl
Mar  6 03:36:09.823: INFO: Got endpoints: latency-svc-kjbhx [752.225017ms]
Mar  6 03:36:09.847: INFO: Created: latency-svc-bv24l
Mar  6 03:36:09.869: INFO: Got endpoints: latency-svc-qdbtd [747.489747ms]
Mar  6 03:36:09.887: INFO: Created: latency-svc-xzf6h
Mar  6 03:36:09.920: INFO: Got endpoints: latency-svc-fdgpg [752.204263ms]
Mar  6 03:36:09.969: INFO: Created: latency-svc-lkjmv
Mar  6 03:36:09.972: INFO: Got endpoints: latency-svc-kw5j5 [751.904458ms]
Mar  6 03:36:09.989: INFO: Created: latency-svc-gvf6m
Mar  6 03:36:10.021: INFO: Got endpoints: latency-svc-cg4w9 [753.449729ms]
Mar  6 03:36:10.040: INFO: Created: latency-svc-4h6kx
Mar  6 03:36:10.069: INFO: Got endpoints: latency-svc-cdzhg [749.635076ms]
Mar  6 03:36:10.087: INFO: Created: latency-svc-wbz9s
Mar  6 03:36:10.120: INFO: Got endpoints: latency-svc-brjd9 [748.508059ms]
Mar  6 03:36:10.135: INFO: Created: latency-svc-kvfvn
Mar  6 03:36:10.168: INFO: Got endpoints: latency-svc-6kxwh [737.951915ms]
Mar  6 03:36:10.207: INFO: Created: latency-svc-s4f5z
Mar  6 03:36:10.222: INFO: Got endpoints: latency-svc-7t4gq [753.926774ms]
Mar  6 03:36:10.239: INFO: Created: latency-svc-2rmxf
Mar  6 03:36:10.268: INFO: Got endpoints: latency-svc-cmfcj [747.606205ms]
Mar  6 03:36:10.299: INFO: Created: latency-svc-kdk9b
Mar  6 03:36:10.320: INFO: Got endpoints: latency-svc-wsczb [748.399002ms]
Mar  6 03:36:10.338: INFO: Created: latency-svc-7fd29
Mar  6 03:36:10.369: INFO: Got endpoints: latency-svc-7xw27 [746.945344ms]
Mar  6 03:36:10.386: INFO: Created: latency-svc-9vnbw
Mar  6 03:36:10.421: INFO: Got endpoints: latency-svc-gxrt2 [751.917688ms]
Mar  6 03:36:10.441: INFO: Created: latency-svc-k2n98
Mar  6 03:36:10.473: INFO: Got endpoints: latency-svc-2phk8 [748.729867ms]
Mar  6 03:36:10.488: INFO: Created: latency-svc-d9z7l
Mar  6 03:36:10.521: INFO: Got endpoints: latency-svc-lb2xl [751.285871ms]
Mar  6 03:36:10.540: INFO: Created: latency-svc-zlkt9
Mar  6 03:36:10.569: INFO: Got endpoints: latency-svc-bv24l [746.197134ms]
Mar  6 03:36:10.582: INFO: Created: latency-svc-jrc28
Mar  6 03:36:10.620: INFO: Got endpoints: latency-svc-xzf6h [750.841418ms]
Mar  6 03:36:10.636: INFO: Created: latency-svc-rs59q
Mar  6 03:36:10.668: INFO: Got endpoints: latency-svc-lkjmv [748.111898ms]
Mar  6 03:36:10.684: INFO: Created: latency-svc-tnkt4
Mar  6 03:36:10.719: INFO: Got endpoints: latency-svc-gvf6m [747.602001ms]
Mar  6 03:36:10.735: INFO: Created: latency-svc-g4v2m
Mar  6 03:36:10.769: INFO: Got endpoints: latency-svc-4h6kx [747.691409ms]
Mar  6 03:36:10.782: INFO: Created: latency-svc-c45ff
Mar  6 03:36:10.822: INFO: Got endpoints: latency-svc-wbz9s [752.644185ms]
Mar  6 03:36:10.834: INFO: Created: latency-svc-hcb6n
Mar  6 03:36:10.872: INFO: Got endpoints: latency-svc-kvfvn [751.583294ms]
Mar  6 03:36:10.886: INFO: Created: latency-svc-xvz8v
Mar  6 03:36:10.922: INFO: Got endpoints: latency-svc-s4f5z [753.96664ms]
Mar  6 03:36:10.942: INFO: Created: latency-svc-svxnc
Mar  6 03:36:10.970: INFO: Got endpoints: latency-svc-2rmxf [748.069379ms]
Mar  6 03:36:10.985: INFO: Created: latency-svc-t75d9
Mar  6 03:36:11.022: INFO: Got endpoints: latency-svc-kdk9b [753.561207ms]
Mar  6 03:36:11.042: INFO: Created: latency-svc-fnd2n
Mar  6 03:36:11.074: INFO: Got endpoints: latency-svc-7fd29 [753.432861ms]
Mar  6 03:36:11.091: INFO: Created: latency-svc-vdf47
Mar  6 03:36:11.124: INFO: Got endpoints: latency-svc-9vnbw [754.633486ms]
Mar  6 03:36:11.175: INFO: Got endpoints: latency-svc-k2n98 [754.180725ms]
Mar  6 03:36:11.178: INFO: Created: latency-svc-57gj2
Mar  6 03:36:11.194: INFO: Created: latency-svc-nxdfx
Mar  6 03:36:11.220: INFO: Got endpoints: latency-svc-d9z7l [747.431294ms]
Mar  6 03:36:11.236: INFO: Created: latency-svc-qln8w
Mar  6 03:36:11.268: INFO: Got endpoints: latency-svc-zlkt9 [746.88298ms]
Mar  6 03:36:11.281: INFO: Created: latency-svc-9hhm6
Mar  6 03:36:11.321: INFO: Got endpoints: latency-svc-jrc28 [751.937076ms]
Mar  6 03:36:11.342: INFO: Created: latency-svc-fb4vx
Mar  6 03:36:11.372: INFO: Got endpoints: latency-svc-rs59q [752.484221ms]
Mar  6 03:36:11.389: INFO: Created: latency-svc-jn882
Mar  6 03:36:11.420: INFO: Got endpoints: latency-svc-tnkt4 [751.858687ms]
Mar  6 03:36:11.436: INFO: Created: latency-svc-czgcw
Mar  6 03:36:11.470: INFO: Got endpoints: latency-svc-g4v2m [750.603973ms]
Mar  6 03:36:11.494: INFO: Created: latency-svc-zv5cl
Mar  6 03:36:11.526: INFO: Got endpoints: latency-svc-c45ff [756.88944ms]
Mar  6 03:36:11.548: INFO: Created: latency-svc-znjxq
Mar  6 03:36:11.568: INFO: Got endpoints: latency-svc-hcb6n [746.270111ms]
Mar  6 03:36:11.582: INFO: Created: latency-svc-q8xk2
Mar  6 03:36:11.623: INFO: Got endpoints: latency-svc-xvz8v [751.403672ms]
Mar  6 03:36:11.645: INFO: Created: latency-svc-6b6fc
Mar  6 03:36:11.668: INFO: Got endpoints: latency-svc-svxnc [746.334767ms]
Mar  6 03:36:11.683: INFO: Created: latency-svc-x2rrz
Mar  6 03:36:11.721: INFO: Got endpoints: latency-svc-t75d9 [750.6728ms]
Mar  6 03:36:11.737: INFO: Created: latency-svc-jj8bc
Mar  6 03:36:11.772: INFO: Got endpoints: latency-svc-fnd2n [750.586367ms]
Mar  6 03:36:11.787: INFO: Created: latency-svc-tjsv2
Mar  6 03:36:11.820: INFO: Got endpoints: latency-svc-vdf47 [746.614406ms]
Mar  6 03:36:11.837: INFO: Created: latency-svc-ss7ds
Mar  6 03:36:11.868: INFO: Got endpoints: latency-svc-57gj2 [743.93204ms]
Mar  6 03:36:11.888: INFO: Created: latency-svc-2mbpx
Mar  6 03:36:11.923: INFO: Got endpoints: latency-svc-nxdfx [747.877629ms]
Mar  6 03:36:11.946: INFO: Created: latency-svc-84n45
Mar  6 03:36:11.968: INFO: Got endpoints: latency-svc-qln8w [747.749561ms]
Mar  6 03:36:11.982: INFO: Created: latency-svc-5jls9
Mar  6 03:36:12.024: INFO: Got endpoints: latency-svc-9hhm6 [755.811683ms]
Mar  6 03:36:12.039: INFO: Created: latency-svc-jp8wz
Mar  6 03:36:12.070: INFO: Got endpoints: latency-svc-fb4vx [749.015093ms]
Mar  6 03:36:12.119: INFO: Got endpoints: latency-svc-jn882 [746.555062ms]
Mar  6 03:36:12.169: INFO: Got endpoints: latency-svc-czgcw [748.418942ms]
Mar  6 03:36:12.224: INFO: Got endpoints: latency-svc-zv5cl [754.110982ms]
Mar  6 03:36:12.270: INFO: Got endpoints: latency-svc-znjxq [743.954225ms]
Mar  6 03:36:12.319: INFO: Got endpoints: latency-svc-q8xk2 [751.501485ms]
Mar  6 03:36:12.369: INFO: Got endpoints: latency-svc-6b6fc [745.107147ms]
Mar  6 03:36:12.421: INFO: Got endpoints: latency-svc-x2rrz [752.560361ms]
Mar  6 03:36:12.471: INFO: Got endpoints: latency-svc-jj8bc [749.850903ms]
Mar  6 03:36:12.529: INFO: Got endpoints: latency-svc-tjsv2 [756.285218ms]
Mar  6 03:36:12.569: INFO: Got endpoints: latency-svc-ss7ds [748.339767ms]
Mar  6 03:36:12.621: INFO: Got endpoints: latency-svc-2mbpx [753.1811ms]
Mar  6 03:36:12.669: INFO: Got endpoints: latency-svc-84n45 [745.506904ms]
Mar  6 03:36:12.721: INFO: Got endpoints: latency-svc-5jls9 [753.199831ms]
Mar  6 03:36:12.769: INFO: Got endpoints: latency-svc-jp8wz [744.969723ms]
Mar  6 03:36:12.769: INFO: Latencies: [32.292566ms 47.171504ms 61.285242ms 84.847375ms 103.974153ms 147.445786ms 275.704828ms 276.411336ms 288.835228ms 289.3897ms 289.828276ms 291.617677ms 294.882099ms 296.385729ms 296.498319ms 298.41003ms 300.269695ms 305.525192ms 307.780342ms 309.986687ms 311.4766ms 312.512697ms 319.981706ms 322.821065ms 324.453084ms 324.724462ms 328.0715ms 328.533998ms 330.693826ms 336.132789ms 339.54353ms 340.632722ms 347.045441ms 349.966228ms 351.546248ms 351.988801ms 353.047077ms 354.72609ms 360.529667ms 361.807646ms 362.07655ms 365.864373ms 366.290366ms 366.836742ms 369.228993ms 373.117377ms 373.881353ms 375.451712ms 382.379754ms 403.151989ms 410.125399ms 416.047779ms 420.538102ms 420.704301ms 421.089883ms 437.182687ms 438.955641ms 440.032455ms 445.684276ms 448.874195ms 486.191446ms 513.559537ms 544.553326ms 561.462801ms 588.99433ms 626.976619ms 649.755332ms 697.685992ms 713.852898ms 724.115786ms 736.477425ms 737.951915ms 738.229949ms 740.867244ms 741.419551ms 743.575115ms 743.93204ms 743.954225ms 743.961534ms 743.984103ms 744.315354ms 744.47375ms 744.713946ms 744.784388ms 744.897778ms 744.969723ms 745.107147ms 745.117803ms 745.506904ms 745.870941ms 746.0179ms 746.197134ms 746.270111ms 746.334767ms 746.555062ms 746.614406ms 746.657347ms 746.861876ms 746.88298ms 746.945344ms 746.948749ms 747.106778ms 747.166234ms 747.294892ms 747.431294ms 747.489747ms 747.602001ms 747.606205ms 747.691409ms 747.744268ms 747.749561ms 747.812258ms 747.855149ms 747.860448ms 747.877629ms 747.913912ms 747.999996ms 748.069379ms 748.111898ms 748.143847ms 748.339767ms 748.399002ms 748.418942ms 748.508059ms 748.582529ms 748.627999ms 748.729867ms 748.78565ms 748.930082ms 749.015093ms 749.020697ms 749.378236ms 749.427485ms 749.49797ms 749.561993ms 749.609486ms 749.635076ms 749.728427ms 749.850903ms 750.042899ms 750.074583ms 750.269835ms 750.584859ms 750.586367ms 750.603973ms 750.6728ms 750.841418ms 751.285871ms 751.403672ms 751.458411ms 751.501485ms 751.583294ms 751.58916ms 751.858687ms 751.904458ms 751.917688ms 751.937076ms 751.977102ms 752.032496ms 752.080547ms 752.183077ms 752.204263ms 752.225017ms 752.228973ms 752.235352ms 752.264121ms 752.30664ms 752.484221ms 752.560361ms 752.644185ms 752.65711ms 752.825219ms 752.884475ms 753.095396ms 753.1811ms 753.199831ms 753.271798ms 753.425009ms 753.432861ms 753.449729ms 753.50366ms 753.561207ms 753.926774ms 753.963352ms 753.96664ms 754.110982ms 754.180725ms 754.442571ms 754.633486ms 754.654457ms 754.793176ms 754.81753ms 755.811683ms 756.044048ms 756.285218ms 756.549881ms 756.88944ms 758.533893ms 761.785927ms 785.718885ms]
Mar  6 03:36:12.769: INFO: 50 %ile: 746.948749ms
Mar  6 03:36:12.769: INFO: 90 %ile: 753.50366ms
Mar  6 03:36:12.769: INFO: 99 %ile: 761.785927ms
Mar  6 03:36:12.769: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:36:12.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9796" for this suite.

• [SLOW TEST:10.875 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":136,"skipped":2498,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:36:12.781: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-2013
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:36:24.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2013" for this suite.

• [SLOW TEST:11.258 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":137,"skipped":2512,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:36:24.039: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-291
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar  6 03:36:24.217: INFO: Waiting up to 5m0s for pod "pod-9648d028-d26f-49bb-8f1e-5e91d4c754b3" in namespace "emptydir-291" to be "success or failure"
Mar  6 03:36:24.224: INFO: Pod "pod-9648d028-d26f-49bb-8f1e-5e91d4c754b3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.340795ms
Mar  6 03:36:26.227: INFO: Pod "pod-9648d028-d26f-49bb-8f1e-5e91d4c754b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009564791s
STEP: Saw pod success
Mar  6 03:36:26.227: INFO: Pod "pod-9648d028-d26f-49bb-8f1e-5e91d4c754b3" satisfied condition "success or failure"
Mar  6 03:36:26.228: INFO: Trying to get logs from node worker02 pod pod-9648d028-d26f-49bb-8f1e-5e91d4c754b3 container test-container: 
STEP: delete the pod
Mar  6 03:36:26.242: INFO: Waiting for pod pod-9648d028-d26f-49bb-8f1e-5e91d4c754b3 to disappear
Mar  6 03:36:26.245: INFO: Pod pod-9648d028-d26f-49bb-8f1e-5e91d4c754b3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:36:26.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-291" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2524,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:36:26.253: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-6743
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6743.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6743.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6743.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6743.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 228.190.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.190.228_udp@PTR;check="$$(dig +tcp +noall +answer +search 228.190.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.190.228_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6743.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6743.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6743.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6743.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6743.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6743.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 228.190.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.190.228_udp@PTR;check="$$(dig +tcp +noall +answer +search 228.190.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.190.228_tcp@PTR;sleep 1; done

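Alongside the forward A and SRV lookups, the probe one-liners above also query a PTR record derived from the service ClusterIP (10.102.190.228 here, which becomes 228.190.102.10.in-addr.arpa.). As a sketch, the reverse-lookup name is built by reversing the IPv4 octets and appending the in-addr.arpa. suffix:

```go
package main

import (
	"fmt"
	"strings"
)

// ptrName builds the reverse-lookup (PTR) query name for an IPv4
// address, e.g. 10.102.190.228 -> 228.190.102.10.in-addr.arpa.
func ptrName(ip string) string {
	oct := strings.Split(ip, ".")
	for i, j := 0, len(oct)-1; i < j; i, j = i+1, j-1 {
		oct[i], oct[j] = oct[j], oct[i]
	}
	return strings.Join(oct, ".") + ".in-addr.arpa."
}

func main() {
	fmt.Println(ptrName("10.102.190.228")) // 228.190.102.10.in-addr.arpa.
}
```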
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:36:30.427: INFO: Unable to read wheezy_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:30.429: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:30.432: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:30.435: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:30.450: INFO: Unable to read jessie_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:30.452: INFO: Unable to read jessie_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:30.454: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:30.456: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:30.469: INFO: Lookups using dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2 failed for: [wheezy_udp@dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_udp@dns-test-service.dns-6743.svc.cluster.local jessie_tcp@dns-test-service.dns-6743.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local]

Mar  6 03:36:35.471: INFO: Unable to read wheezy_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:35.474: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:35.476: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:35.478: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:35.493: INFO: Unable to read jessie_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:35.495: INFO: Unable to read jessie_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:35.497: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:35.499: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:35.517: INFO: Lookups using dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2 failed for: [wheezy_udp@dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_udp@dns-test-service.dns-6743.svc.cluster.local jessie_tcp@dns-test-service.dns-6743.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local]

Mar  6 03:36:40.472: INFO: Unable to read wheezy_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:40.476: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:40.479: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:40.481: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:40.498: INFO: Unable to read jessie_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:40.500: INFO: Unable to read jessie_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:40.506: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:40.510: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:40.525: INFO: Lookups using dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2 failed for: [wheezy_udp@dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_udp@dns-test-service.dns-6743.svc.cluster.local jessie_tcp@dns-test-service.dns-6743.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local]

Mar  6 03:36:45.471: INFO: Unable to read wheezy_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:45.474: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:45.476: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:45.479: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:45.493: INFO: Unable to read jessie_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:45.496: INFO: Unable to read jessie_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:45.498: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:45.500: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:45.523: INFO: Lookups using dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2 failed for: [wheezy_udp@dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_udp@dns-test-service.dns-6743.svc.cluster.local jessie_tcp@dns-test-service.dns-6743.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local]

Mar  6 03:36:50.471: INFO: Unable to read wheezy_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:50.474: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:50.476: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:50.478: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:50.492: INFO: Unable to read jessie_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:50.494: INFO: Unable to read jessie_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:50.496: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:50.498: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:50.518: INFO: Lookups using dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2 failed for: [wheezy_udp@dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_udp@dns-test-service.dns-6743.svc.cluster.local jessie_tcp@dns-test-service.dns-6743.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local]

Mar  6 03:36:55.471: INFO: Unable to read wheezy_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:55.474: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:55.479: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:55.481: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:55.498: INFO: Unable to read jessie_udp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:55.501: INFO: Unable to read jessie_tcp@dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:55.504: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:55.506: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local from pod dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2: the server could not find the requested resource (get pods dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2)
Mar  6 03:36:55.524: INFO: Lookups using dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2 failed for: [wheezy_udp@dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@dns-test-service.dns-6743.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_udp@dns-test-service.dns-6743.svc.cluster.local jessie_tcp@dns-test-service.dns-6743.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6743.svc.cluster.local]

Mar  6 03:37:00.519: INFO: DNS probes using dns-6743/dns-test-8f2c5421-e82d-4b6d-83da-daf83d1414b2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:37:00.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6743" for this suite.

• [SLOW TEST:34.362 seconds]
[sig-network] DNS
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":139,"skipped":2542,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:37:00.615: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-697
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-3b3667f6-1302-4984-926e-18ede513a3ab
STEP: Creating secret with name secret-projected-all-test-volume-e1c7b29e-8604-4655-8a6d-4092555a551e
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar  6 03:37:00.763: INFO: Waiting up to 5m0s for pod "projected-volume-cc92100b-0280-4b1a-b692-86bd40ba0805" in namespace "projected-697" to be "success or failure"
Mar  6 03:37:00.766: INFO: Pod "projected-volume-cc92100b-0280-4b1a-b692-86bd40ba0805": Phase="Pending", Reason="", readiness=false. Elapsed: 2.70734ms
Mar  6 03:37:02.768: INFO: Pod "projected-volume-cc92100b-0280-4b1a-b692-86bd40ba0805": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005184584s
STEP: Saw pod success
Mar  6 03:37:02.768: INFO: Pod "projected-volume-cc92100b-0280-4b1a-b692-86bd40ba0805" satisfied condition "success or failure"
Mar  6 03:37:02.770: INFO: Trying to get logs from node worker02 pod projected-volume-cc92100b-0280-4b1a-b692-86bd40ba0805 container projected-all-volume-test: 
STEP: delete the pod
Mar  6 03:37:02.786: INFO: Waiting for pod projected-volume-cc92100b-0280-4b1a-b692-86bd40ba0805 to disappear
Mar  6 03:37:02.788: INFO: Pod projected-volume-cc92100b-0280-4b1a-b692-86bd40ba0805 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:37:02.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-697" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2556,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:37:02.795: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9964
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:37:02.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-829bcbc0-6e06-4c73-92f9-140752abcddf" in namespace "downward-api-9964" to be "success or failure"
Mar  6 03:37:02.935: INFO: Pod "downwardapi-volume-829bcbc0-6e06-4c73-92f9-140752abcddf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638434ms
Mar  6 03:37:04.937: INFO: Pod "downwardapi-volume-829bcbc0-6e06-4c73-92f9-140752abcddf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005089757s
STEP: Saw pod success
Mar  6 03:37:04.937: INFO: Pod "downwardapi-volume-829bcbc0-6e06-4c73-92f9-140752abcddf" satisfied condition "success or failure"
Mar  6 03:37:04.939: INFO: Trying to get logs from node worker02 pod downwardapi-volume-829bcbc0-6e06-4c73-92f9-140752abcddf container client-container: 
STEP: delete the pod
Mar  6 03:37:04.963: INFO: Waiting for pod downwardapi-volume-829bcbc0-6e06-4c73-92f9-140752abcddf to disappear
Mar  6 03:37:04.967: INFO: Pod downwardapi-volume-829bcbc0-6e06-4c73-92f9-140752abcddf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:37:04.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9964" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2556,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:37:04.980: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3870
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:37:05.790: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:37:08.820: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:37:08.823: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5319-crds.webhook.example.com via the AdmissionRegistration API
Mar  6 03:37:24.356: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:37:34.464: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:37:44.565: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:37:54.666: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:38:04.675: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:38:04.676: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-3870".
STEP: Found 6 events.
Mar  6 03:38:05.187: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-xttjj: {default-scheduler } Scheduled: Successfully assigned webhook-3870/sample-webhook-deployment-5f65f8c764-xttjj to worker02
Mar  6 03:38:05.187: INFO: At 2020-03-06 03:37:05 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:38:05.187: INFO: At 2020-03-06 03:37:05 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-xttjj
Mar  6 03:38:05.187: INFO: At 2020-03-06 03:37:06 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-xttjj: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:38:05.187: INFO: At 2020-03-06 03:37:06 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-xttjj: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:38:05.187: INFO: At 2020-03-06 03:37:06 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-xttjj: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:38:05.190: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:38:05.190: INFO: sample-webhook-deployment-5f65f8c764-xttjj  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:37:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:37:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:37:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:37:05 +0000 UTC  }]
Mar  6 03:38:05.191: INFO: 
Mar  6 03:38:05.193: INFO: 
Logging node info for node master01
Mar  6 03:38:05.195: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 19662 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:34:03 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:38:05.196: INFO: 
Logging kubelet events for node master01
Mar  6 03:38:05.201: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:38:05.218: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:38:05.218: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:38:05.218: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:38:05.218: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:38:05.218: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:38:05.218: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:38:05.218: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.218: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:38:05.218: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.218: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:38:05.218: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.218: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:38:05.218: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.218: INFO: 	Container kube-scheduler ready: true, restart count 1
W0306 03:38:05.222062      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:38:05.238: INFO: 
Latency metrics for node master01
Mar  6 03:38:05.238: INFO: 
Logging node info for node master02
Mar  6 03:38:05.240: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 19646 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:33:59 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:38:05.240: INFO: 
Logging kubelet events for node master02
Mar  6 03:38:05.244: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:38:05.253: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:38:05.253: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:38:05.253: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:38:05.253: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.253: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:38:05.253: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.253: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:38:05.253: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.253: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:38:05.253: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.253: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:38:05.253: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:38:05.253: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:38:05.253: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:38:05.253: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.253: INFO: 	Container coredns ready: true, restart count 0
W0306 03:38:05.257369      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:38:05.271: INFO: 
Latency metrics for node master02
Mar  6 03:38:05.271: INFO: 
Logging node info for node master03
Mar  6 03:38:05.275: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 19651 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:34:00 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:38:05.276: INFO: 
Logging kubelet events for node master03
Mar  6 03:38:05.280: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:38:05.292: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:38:05.292: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:38:05.292: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:38:05.292: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:38:05.292: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:38:05.292: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:38:05.292: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:38:05.292: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:38:05.292: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:38:05.292: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:38:05.292: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.292: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:38:05.295917      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:38:05.332: INFO: 
Latency metrics for node master03
Mar  6 03:38:05.332: INFO: 
Logging node info for node worker01
Mar  6 03:38:05.334: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 19979 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:34:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:34:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:38:05.334: INFO: 
Logging kubelet events for node worker01
Mar  6 03:38:05.343: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:38:05.363: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:38:05.363: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:38:05.363: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:38:05.363: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:38:05.363: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:38:05.363: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:38:05.363: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:38:05.363: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:38:05.363: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:38:05.363: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:38:05.363: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:38:05.363: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:38:05.363: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:38:05.363: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:38:05.363: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:38:05.366391      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:38:05.383: INFO: 
Latency metrics for node worker01
Mar  6 03:38:05.383: INFO: 
Logging node info for node worker02
Mar  6 03:38:05.385: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 20070 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:35:24 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:35:24 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:35:24 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:35:24 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:38:05.385: INFO: 
Logging kubelet events for node worker02
Mar  6 03:38:05.391: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:38:05.396: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.396: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:38:05.396: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.396: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:38:05.396: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:38:05.396: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:38:05.396: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:38:05.396: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:38:05.396: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:38:05.396: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:38:05.396: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:38:05.396: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:38:05.396: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:38:05.396: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:38:05.396: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:38:05.396: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:38:05.396: INFO: sample-webhook-deployment-5f65f8c764-xttjj started at 2020-03-06 03:37:05 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:38:05.396: INFO: 	Container sample-webhook ready: true, restart count 0
W0306 03:38:05.400711      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:38:05.429: INFO: 
Latency metrics for node worker02
Mar  6 03:38:05.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3870" for this suite.
STEP: Destroying namespace "webhook-3870-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [60.513 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:38:04.676: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1865
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":141,"skipped":2584,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:38:05.493: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-3487
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:38:05.658: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar  6 03:38:10.661: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar  6 03:38:10.661: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar  6 03:38:12.663: INFO: Creating deployment "test-rollover-deployment"
Mar  6 03:38:12.669: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Mar  6 03:38:14.674: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Mar  6 03:38:14.680: INFO: Ensure that both replica sets have 1 created replica
Mar  6 03:38:14.686: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Mar  6 03:38:14.691: INFO: Updating deployment test-rollover-deployment
Mar  6 03:38:14.691: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Mar  6 03:38:16.695: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Mar  6 03:38:16.699: INFO: Make sure deployment "test-rollover-deployment" is complete
Mar  6 03:38:16.705: INFO: all replica sets need to contain the pod-template-hash label
Mar  6 03:38:16.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062696, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:38:18.712: INFO: all replica sets need to contain the pod-template-hash label
Mar  6 03:38:18.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062696, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:38:20.725: INFO: all replica sets need to contain the pod-template-hash label
Mar  6 03:38:20.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062696, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:38:22.709: INFO: all replica sets need to contain the pod-template-hash label
Mar  6 03:38:22.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062696, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:38:24.715: INFO: all replica sets need to contain the pod-template-hash label
Mar  6 03:38:24.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062696, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719062692, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:38:26.711: INFO: 
Mar  6 03:38:26.711: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar  6 03:38:26.720: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-3487 /apis/apps/v1/namespaces/deployment-3487/deployments/test-rollover-deployment c80889ae-fdf2-4d63-838d-9971b25e499d 22365 2 2020-03-06 03:38:12 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00375fa78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-06 03:38:12 +0000 UTC,LastTransitionTime:2020-03-06 03:38:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-06 03:38:26 +0000 UTC,LastTransitionTime:2020-03-06 03:38:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Mar  6 03:38:26.722: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-3487 /apis/apps/v1/namespaces/deployment-3487/replicasets/test-rollover-deployment-574d6dfbff 4969b549-aa68-4914-a6e4-18b43b3b5b33 22354 2 2020-03-06 03:38:14 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment c80889ae-fdf2-4d63-838d-9971b25e499d 0xc0052dfdc7 0xc0052dfdc8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052dfe78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:38:26.722: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Mar  6 03:38:26.722: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-3487 /apis/apps/v1/namespaces/deployment-3487/replicasets/test-rollover-controller b5196659-81c1-4a12-a416-46465507776d 22364 2 2020-03-06 03:38:05 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment c80889ae-fdf2-4d63-838d-9971b25e499d 0xc0052dfc67 0xc0052dfc68}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0052dfd28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:38:26.722: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-3487 /apis/apps/v1/namespaces/deployment-3487/replicasets/test-rollover-deployment-f6c94f66c 5680877c-865b-4b5e-bd4c-1c16c23c849c 22306 2 2020-03-06 03:38:12 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment c80889ae-fdf2-4d63-838d-9971b25e499d 0xc0052dff10 0xc0052dff11}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00373c1e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:38:26.725: INFO: Pod "test-rollover-deployment-574d6dfbff-85p67" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-85p67 test-rollover-deployment-574d6dfbff- deployment-3487 /api/v1/namespaces/deployment-3487/pods/test-rollover-deployment-574d6dfbff-85p67 a9561c4d-ca4d-4451-8ed1-ea6505c2019c 22319 0 2020-03-06 03:38:14 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 4969b549-aa68-4914-a6e4-18b43b3b5b33 0xc00373dc27 0xc00373dc28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4fqx4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4fqx4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4fqx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:38:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:38:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:38:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:38:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.172,StartTime:2020-03-06 03:38:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:38:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://3adec60e1c502c45fd645184159ba3dd18c7fd897028b975c319d1ee658c259a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.172,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:38:26.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3487" for this suite.

• [SLOW TEST:21.240 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":142,"skipped":2618,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
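For reference, the rollover scenario above (a bare ReplicaSet "test-rollover-controller" adopted by a new Deployment, which is then updated to a new image) uses a Deployment shaped roughly like the following sketch. The name, labels, image, and strategy fields are taken from the DeploymentSpec dumped in the log; the manifest itself is reconstructed for illustration and is not part of the test output:

```yaml
# Illustrative reconstruction of "test-rollover-deployment" from the
# DeploymentSpec printed above; not emitted by the test itself.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # new pod must stay ready this long before it counts as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count
      maxSurge: 1              # allow one extra pod during the rollover
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```

With maxUnavailable: 0 and maxSurge: 1, the controller keeps an old ReplicaSet serving until the new pod has satisfied minReadySeconds, which is why the status dumps above repeatedly show UnavailableReplicas:1 before the old ReplicaSets are finally scaled to zero.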
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:38:26.733: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-1189
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:38:42.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1189" for this suite.

• [SLOW TEST:16.174 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":143,"skipped":2640,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
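The quota object itself is not printed in the log, but a minimal ResourceQuota that would capture a ConfigMap's lifecycle as in the steps above might look like this (object name and limit are hypothetical, chosen only to illustrate the shape):

```yaml
# Hypothetical sketch: an object-count quota tracking ConfigMaps,
# as exercised by the test steps above. Name and limit are illustrative.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota          # hypothetical; the actual name is not logged
spec:
  hard:
    configmaps: "2"
```

Once applied, status.used.configmaps rises when a ConfigMap is created in the namespace and falls back when it is deleted, which is what the "captures configMap creation" and "released usage" steps verify.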
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:38:42.907: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-9371
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:38:54.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9371" for this suite.

• [SLOW TEST:11.166 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":144,"skipped":2697,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
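The replication controller variant works the same way, with the quota counting replicationcontrollers instead. A hypothetical sketch (name and limit are illustrative, not taken from the log):

```yaml
# Hypothetical sketch: an object-count quota tracking ReplicationControllers.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota          # hypothetical; the actual name is not logged
spec:
  hard:
    replicationcontrollers: "1"
```

The quota controller populates status.hard and status.used for the tracked resource; the test polls that status to confirm usage is charged on creation and released on deletion.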
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:38:54.073: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-5305
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar  6 03:38:54.209: INFO: Waiting up to 5m0s for pod "pod-7b5f57e7-95aa-47fb-9957-8d2fbea1173d" in namespace "emptydir-5305" to be "success or failure"
Mar  6 03:38:54.212: INFO: Pod "pod-7b5f57e7-95aa-47fb-9957-8d2fbea1173d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195259ms
Mar  6 03:38:56.215: INFO: Pod "pod-7b5f57e7-95aa-47fb-9957-8d2fbea1173d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005639128s
STEP: Saw pod success
Mar  6 03:38:56.215: INFO: Pod "pod-7b5f57e7-95aa-47fb-9957-8d2fbea1173d" satisfied condition "success or failure"
Mar  6 03:38:56.217: INFO: Trying to get logs from node worker02 pod pod-7b5f57e7-95aa-47fb-9957-8d2fbea1173d container test-container: 
STEP: delete the pod
Mar  6 03:38:56.229: INFO: Waiting for pod pod-7b5f57e7-95aa-47fb-9957-8d2fbea1173d to disappear
Mar  6 03:38:56.231: INFO: Pod pod-7b5f57e7-95aa-47fb-9957-8d2fbea1173d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:38:56.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5305" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2708,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
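The "(root,0666,tmpfs)" case above writes a file into a memory-backed emptyDir and verifies its mode. A pod of the same shape can be sketched like this; the image and command are illustrative substitutes, not the test's actual invocation:

```yaml
# Illustrative: tmpfs-backed emptyDir holding a 0666 file, checked as root.
# Image, name, and command are hypothetical stand-ins for the e2e test pod.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo   # hypothetical name; the test uses a generated UUID
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/vol/f && chmod 0666 /mnt/vol/f && stat -c %a /mnt/vol/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/vol
  volumes:
  - name: vol
    emptyDir:
      medium: Memory         # tmpfs backing, matching the "(tmpfs)" in the test name
```

The test waits for the pod to reach the "success or failure" condition seen above (phase Succeeded) and then reads the container log to confirm the reported mode.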
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:38:56.239: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-8282
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8282
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8282
STEP: Creating statefulset with conflicting port in namespace statefulset-8282
STEP: Waiting until pod test-pod will start running in namespace statefulset-8282
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8282
Mar  6 03:38:58.393: INFO: Observed stateful pod in namespace: statefulset-8282, name: ss-0, uid: cca99739-377a-4bac-ad27-18f31c77646c, status phase: Pending. Waiting for statefulset controller to delete.
Mar  6 03:38:58.587: INFO: Observed stateful pod in namespace: statefulset-8282, name: ss-0, uid: cca99739-377a-4bac-ad27-18f31c77646c, status phase: Failed. Waiting for statefulset controller to delete.
Mar  6 03:38:58.595: INFO: Observed stateful pod in namespace: statefulset-8282, name: ss-0, uid: cca99739-377a-4bac-ad27-18f31c77646c, status phase: Failed. Waiting for statefulset controller to delete.
Mar  6 03:38:58.598: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8282
STEP: Removing pod with conflicting port in namespace statefulset-8282
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8282 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar  6 03:39:00.625: INFO: Deleting all statefulset in ns statefulset-8282
Mar  6 03:39:00.627: INFO: Scaling statefulset ss to 0
Mar  6 03:39:20.637: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 03:39:20.639: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:39:20.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8282" for this suite.

• [SLOW TEST:24.415 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":146,"skipped":2722,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:39:20.654: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1905
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Update Demo
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Mar  6 03:39:20.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-1905'
Mar  6 03:39:25.987: INFO: stderr: ""
Mar  6 03:39:25.987: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar  6 03:39:25.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1905'
Mar  6 03:39:41.076: INFO: stderr: ""
Mar  6 03:39:41.076: INFO: stdout: "update-demo-nautilus-nx55m update-demo-nautilus-vvkvl "
Mar  6 03:39:41.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-nx55m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1905'
Mar  6 03:39:41.152: INFO: stderr: ""
Mar  6 03:39:41.152: INFO: stdout: "true"
Mar  6 03:39:41.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-nx55m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1905'
Mar  6 03:39:41.215: INFO: stderr: ""
Mar  6 03:39:41.215: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:39:41.215: INFO: validating pod update-demo-nautilus-nx55m
Mar  6 03:39:41.220: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:39:41.220: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:39:41.220: INFO: update-demo-nautilus-nx55m is verified up and running
Mar  6 03:39:41.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-vvkvl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1905'
Mar  6 03:39:41.282: INFO: stderr: ""
Mar  6 03:39:41.282: INFO: stdout: "true"
Mar  6 03:39:41.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-vvkvl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1905'
Mar  6 03:39:41.345: INFO: stderr: ""
Mar  6 03:39:41.345: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:39:41.345: INFO: validating pod update-demo-nautilus-vvkvl
Mar  6 03:39:41.348: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:39:41.348: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:39:41.348: INFO: update-demo-nautilus-vvkvl is verified up and running
STEP: rolling-update to new replication controller
Mar  6 03:39:41.350: INFO: scanned /root for discovery docs: 
Mar  6 03:39:41.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1905'
Mar  6 03:40:03.739: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar  6 03:40:03.739: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar  6 03:40:03.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1905'
Mar  6 03:40:03.807: INFO: stderr: ""
Mar  6 03:40:03.807: INFO: stdout: "update-demo-kitten-mq7rg update-demo-kitten-qmw95 update-demo-nautilus-nx55m "
STEP: Replicas for name=update-demo: expected=2 actual=3
Mar  6 03:40:08.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1905'
Mar  6 03:40:08.872: INFO: stderr: ""
Mar  6 03:40:08.872: INFO: stdout: "update-demo-kitten-mq7rg update-demo-kitten-qmw95 "
Mar  6 03:40:08.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-kitten-mq7rg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1905'
Mar  6 03:40:08.936: INFO: stderr: ""
Mar  6 03:40:08.936: INFO: stdout: "true"
Mar  6 03:40:08.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-kitten-mq7rg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1905'
Mar  6 03:40:09.000: INFO: stderr: ""
Mar  6 03:40:09.000: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar  6 03:40:09.000: INFO: validating pod update-demo-kitten-mq7rg
Mar  6 03:40:09.008: INFO: got data: {
  "image": "kitten.jpg"
}

Mar  6 03:40:09.008: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar  6 03:40:09.008: INFO: update-demo-kitten-mq7rg is verified up and running
Mar  6 03:40:09.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-kitten-qmw95 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1905'
Mar  6 03:40:09.072: INFO: stderr: ""
Mar  6 03:40:09.072: INFO: stdout: "true"
Mar  6 03:40:09.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-kitten-qmw95 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1905'
Mar  6 03:40:09.134: INFO: stderr: ""
Mar  6 03:40:09.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar  6 03:40:09.134: INFO: validating pod update-demo-kitten-qmw95
Mar  6 03:40:09.138: INFO: got data: {
  "image": "kitten.jpg"
}

Mar  6 03:40:09.138: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar  6 03:40:09.138: INFO: update-demo-kitten-qmw95 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:09.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1905" for this suite.

• [SLOW TEST:48.493 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":147,"skipped":2726,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
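The kubectl invocations above pass Go template expressions such as `{{range .items}}{{.metadata.name}} {{end}}` via `-o template`. These are standard `text/template` expressions evaluated against the unstructured (map-based) API response; the sketch below replays that extraction locally on mock pod data using names from the log. Note that kubectl also registers extra template functions like `exists` (seen in the containerStatuses checks above), which stock `text/template` does not provide:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderNames evaluates the same template string the log's kubectl calls
// use against an unstructured pod list (maps, as kubectl sees JSON).
func renderNames(data map[string]any) string {
	t := template.Must(template.New("pods").Parse(
		`{{range .items}}{{.metadata.name}} {{end}}`))
	var sb strings.Builder
	if err := t.Execute(&sb, data); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	pods := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"name": "update-demo-kitten-mq7rg"}},
			{"metadata": map[string]any{"name": "update-demo-kitten-qmw95"}},
		},
	}
	// Prints: update-demo-kitten-mq7rg update-demo-kitten-qmw95
	fmt.Println(renderNames(pods))
}
```

Field-style access (`.metadata.name`) works here because `text/template` indexes string-keyed maps with the same dot syntax it uses for struct fields.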
SSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:09.148: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename lease-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in lease-test-475
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:09.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-475" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":148,"skipped":2737,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:09.324: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-402
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar  6 03:40:09.461: INFO: Waiting up to 5m0s for pod "pod-ba9338d2-e654-4d2e-9053-ce681a689c29" in namespace "emptydir-402" to be "success or failure"
Mar  6 03:40:09.464: INFO: Pod "pod-ba9338d2-e654-4d2e-9053-ce681a689c29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192753ms
Mar  6 03:40:11.466: INFO: Pod "pod-ba9338d2-e654-4d2e-9053-ce681a689c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00471658s
STEP: Saw pod success
Mar  6 03:40:11.466: INFO: Pod "pod-ba9338d2-e654-4d2e-9053-ce681a689c29" satisfied condition "success or failure"
Mar  6 03:40:11.468: INFO: Trying to get logs from node worker02 pod pod-ba9338d2-e654-4d2e-9053-ce681a689c29 container test-container: 
STEP: delete the pod
Mar  6 03:40:11.480: INFO: Waiting for pod pod-ba9338d2-e654-4d2e-9053-ce681a689c29 to disappear
Mar  6 03:40:11.483: INFO: Pod pod-ba9338d2-e654-4d2e-9053-ce681a689c29 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:11.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-402" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2737,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:11.490: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4369
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-71aa35c9-99a5-4306-993b-54e4d4c7ced7
STEP: Creating configMap with name cm-test-opt-upd-d52493c1-120d-4b6d-aa6c-c543654387aa
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-71aa35c9-99a5-4306-993b-54e4d4c7ced7
STEP: Updating configmap cm-test-opt-upd-d52493c1-120d-4b6d-aa6c-c543654387aa
STEP: Creating configMap with name cm-test-opt-create-28bcc4f2-12f8-4f98-84fd-ed7a8301b06b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:17.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4369" for this suite.

• [SLOW TEST:6.216 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2755,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:17.707: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9213
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:40:17.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d74cfb0-07b4-46eb-8973-4283e015d8bd" in namespace "downward-api-9213" to be "success or failure"
Mar  6 03:40:17.850: INFO: Pod "downwardapi-volume-4d74cfb0-07b4-46eb-8973-4283e015d8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.712987ms
Mar  6 03:40:19.856: INFO: Pod "downwardapi-volume-4d74cfb0-07b4-46eb-8973-4283e015d8bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007124428s
STEP: Saw pod success
Mar  6 03:40:19.856: INFO: Pod "downwardapi-volume-4d74cfb0-07b4-46eb-8973-4283e015d8bd" satisfied condition "success or failure"
Mar  6 03:40:19.858: INFO: Trying to get logs from node worker02 pod downwardapi-volume-4d74cfb0-07b4-46eb-8973-4283e015d8bd container client-container: 
STEP: delete the pod
Mar  6 03:40:19.875: INFO: Waiting for pod downwardapi-volume-4d74cfb0-07b4-46eb-8973-4283e015d8bd to disappear
Mar  6 03:40:19.876: INFO: Pod downwardapi-volume-4d74cfb0-07b4-46eb-8973-4283e015d8bd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:19.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9213" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2770,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:19.883: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-276
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-276
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-276
STEP: creating replication controller externalsvc in namespace services-276
I0306 03:40:20.050810      19 runners.go:189] Created replication controller with name: externalsvc, namespace: services-276, replica count: 2
I0306 03:40:23.101075      19 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Mar  6 03:40:23.125: INFO: Creating new exec pod
Mar  6 03:40:25.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-276 execpod7m6hm -- /bin/sh -x -c nslookup nodeport-service'
Mar  6 03:40:25.335: INFO: stderr: "+ nslookup nodeport-service\n"
Mar  6 03:40:25.335: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-276.svc.cluster.local\tcanonical name = externalsvc.services-276.svc.cluster.local.\nName:\texternalsvc.services-276.svc.cluster.local\nAddress: 10.107.39.17\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-276, will wait for the garbage collector to delete the pods
Mar  6 03:40:25.398: INFO: Deleting ReplicationController externalsvc took: 7.553687ms
Mar  6 03:40:25.899: INFO: Terminating ReplicationController externalsvc pods took: 500.323843ms
Mar  6 03:40:30.337: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:30.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-276" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:10.480 seconds]
[sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":152,"skipped":2770,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:30.363: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-8018
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-02136f6c-6c27-4f10-b37a-ba52fc169ef4
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-02136f6c-6c27-4f10-b37a-ba52fc169ef4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:34.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8018" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2799,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:34.569: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-6325
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Mar  6 03:40:36.718: INFO: Pod pod-hostip-951218ae-5537-4c6a-a894-2d19e352750b has hostIP: 192.168.1.251
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:36.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6325" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2800,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:36.725: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1061
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0306 03:40:46.912868      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:40:46.912: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:46.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1061" for this suite.

• [SLOW TEST:10.196 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":155,"skipped":2834,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:46.922: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-2876
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar  6 03:40:47.055: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  6 03:40:47.063: INFO: Waiting for terminating namespaces to be deleted...
Mar  6 03:40:47.065: INFO: 
Logging pods the kubelet thinks are on node worker01 before test
Mar  6 03:40:47.076: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:40:47.076: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:40:47.076: INFO: simpletest-rc-to-be-deleted-9648k from gc-1061 started at 2020-03-06 03:40:36 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container nginx ready: true, restart count 0
Mar  6 03:40:47.076: INFO: kube-proxy-kcb8f from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:40:47.076: INFO: contour-certgen-82k46 from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:40:47.076: INFO: contour-54748c65f5-gk5sz from projectcontour started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:40:47.076: INFO: kube-flannel-ds-amd64-xxhz9 from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:40:47.076: INFO: kuard-678c676f5d-vsn86 from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:40:47.076: INFO: simpletest-rc-to-be-deleted-8kjg4 from gc-1061 started at 2020-03-06 03:40:36 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container nginx ready: true, restart count 0
Mar  6 03:40:47.076: INFO: envoy-lvmcb from projectcontour started at 2020-03-06 02:30:45 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:40:47.076: INFO: kuard-678c676f5d-m29b6 from default started at 2020-03-06 02:30:49 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:40:47.076: INFO: kuard-678c676f5d-tzsnn from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:40:47.076: INFO: contour-54748c65f5-jl5wz from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:40:47.076: INFO: metrics-server-78799bf646-xrsnn from kube-system started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:40:47.076: INFO: simpletest-rc-to-be-deleted-7bznv from gc-1061 started at 2020-03-06 03:40:36 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.076: INFO: 	Container nginx ready: true, restart count 0
Mar  6 03:40:47.076: INFO: 
Logging pods the kubelet thinks are on node worker02 before test
Mar  6 03:40:47.081: INFO: sonobuoy from sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.081: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:40:47.081: INFO: simpletest-rc-to-be-deleted-gz22m from gc-1061 started at 2020-03-06 03:40:36 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.081: INFO: 	Container nginx ready: true, restart count 0
Mar  6 03:40:47.081: INFO: kube-proxy-5xxdb from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.081: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:40:47.081: INFO: envoy-wgz76 from projectcontour started at 2020-03-06 02:30:55 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.081: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:40:47.081: INFO: sonobuoy-e2e-job-67137ff64ac145d3 from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:40:47.081: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:40:47.081: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:40:47.081: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:40:47.081: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:40:47.081: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:40:47.081: INFO: simpletest-rc-to-be-deleted-dfcjl from gc-1061 started at 2020-03-06 03:40:36 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.081: INFO: 	Container nginx ready: true, restart count 0
Mar  6 03:40:47.081: INFO: kube-flannel-ds-amd64-ztfzf from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:40:47.081: INFO: 	Container kube-flannel ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f999faf31205d1], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:48.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2876" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":156,"skipped":2873,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:48.110: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-1989
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-1989
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1989 to expose endpoints map[]
Mar  6 03:40:48.258: INFO: Get endpoints failed (2.589894ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Mar  6 03:40:49.260: INFO: successfully validated that service multi-endpoint-test in namespace services-1989 exposes endpoints map[] (1.004990468s elapsed)
STEP: Creating pod pod1 in namespace services-1989
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1989 to expose endpoints map[pod1:[100]]
Mar  6 03:40:51.278: INFO: successfully validated that service multi-endpoint-test in namespace services-1989 exposes endpoints map[pod1:[100]] (2.011860575s elapsed)
STEP: Creating pod pod2 in namespace services-1989
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1989 to expose endpoints map[pod1:[100] pod2:[101]]
Mar  6 03:40:53.316: INFO: successfully validated that service multi-endpoint-test in namespace services-1989 exposes endpoints map[pod1:[100] pod2:[101]] (2.034484323s elapsed)
STEP: Deleting pod pod1 in namespace services-1989
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1989 to expose endpoints map[pod2:[101]]
Mar  6 03:40:54.336: INFO: successfully validated that service multi-endpoint-test in namespace services-1989 exposes endpoints map[pod2:[101]] (1.015049804s elapsed)
STEP: Deleting pod pod2 in namespace services-1989
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1989 to expose endpoints map[]
Mar  6 03:40:55.346: INFO: successfully validated that service multi-endpoint-test in namespace services-1989 exposes endpoints map[] (1.003978888s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:55.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1989" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:7.288 seconds]
[sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":157,"skipped":2895,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:55.398: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-3987
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e0c16837-4d0c-498a-99e1-dd7cf272cb81
STEP: Creating a pod to test consume secrets
Mar  6 03:40:55.549: INFO: Waiting up to 5m0s for pod "pod-secrets-e32511ae-8b02-4740-851d-883d25e2da26" in namespace "secrets-3987" to be "success or failure"
Mar  6 03:40:55.554: INFO: Pod "pod-secrets-e32511ae-8b02-4740-851d-883d25e2da26": Phase="Pending", Reason="", readiness=false. Elapsed: 5.154689ms
Mar  6 03:40:57.557: INFO: Pod "pod-secrets-e32511ae-8b02-4740-851d-883d25e2da26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007872819s
STEP: Saw pod success
Mar  6 03:40:57.557: INFO: Pod "pod-secrets-e32511ae-8b02-4740-851d-883d25e2da26" satisfied condition "success or failure"
Mar  6 03:40:57.559: INFO: Trying to get logs from node worker02 pod pod-secrets-e32511ae-8b02-4740-851d-883d25e2da26 container secret-volume-test: 
STEP: delete the pod
Mar  6 03:40:57.573: INFO: Waiting for pod pod-secrets-e32511ae-8b02-4740-851d-883d25e2da26 to disappear
Mar  6 03:40:57.574: INFO: Pod pod-secrets-e32511ae-8b02-4740-851d-883d25e2da26 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:57.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3987" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2906,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
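The "Waiting up to 5m0s for pod ... to be \"success or failure\"" lines above come from the e2e framework's poll-until-condition loop: check the pod phase on an interval until it is terminal or the timeout expires. A minimal sketch of that pattern in Python (`get_phase`, `now`, and `sleep` are hypothetical stand-ins for an API call and the clock, not the framework's real helpers):

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, interval_s=2.0,
                           now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or the timeout elapses.

    Either "Succeeded" or "Failed" satisfies the test's "success or failure"
    condition; on timeout a TimeoutError is raised instead.
    """
    deadline = now() + timeout_s
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() >= deadline:
            raise TimeoutError("pod did not reach a terminal phase in time")
        sleep(interval_s)

# Simulated phase sequence matching the log above: Pending once, then Succeeded.
phases = iter(["Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), sleep=lambda s: None)
```

Injecting `sleep` and `now` keeps the sketch testable without real delays, which is also why the log can show sub-second "Elapsed" values for fast pods.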
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:57.581: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-462
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar  6 03:40:57.719: INFO: Waiting up to 5m0s for pod "pod-feb62bab-960c-46bb-92e1-9ddd958bd208" in namespace "emptydir-462" to be "success or failure"
Mar  6 03:40:57.722: INFO: Pod "pod-feb62bab-960c-46bb-92e1-9ddd958bd208": Phase="Pending", Reason="", readiness=false. Elapsed: 2.673244ms
Mar  6 03:40:59.724: INFO: Pod "pod-feb62bab-960c-46bb-92e1-9ddd958bd208": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004761602s
STEP: Saw pod success
Mar  6 03:40:59.724: INFO: Pod "pod-feb62bab-960c-46bb-92e1-9ddd958bd208" satisfied condition "success or failure"
Mar  6 03:40:59.726: INFO: Trying to get logs from node worker02 pod pod-feb62bab-960c-46bb-92e1-9ddd958bd208 container test-container: 
STEP: delete the pod
Mar  6 03:40:59.738: INFO: Waiting for pod pod-feb62bab-960c-46bb-92e1-9ddd958bd208 to disappear
Mar  6 03:40:59.740: INFO: Pod pod-feb62bab-960c-46bb-92e1-9ddd958bd208 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:59.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-462" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2931,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:59.746: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4117
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar  6 03:40:59.895: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4117 /api/v1/namespaces/watch-4117/configmaps/e2e-watch-test-resource-version bb10553b-415c-4e6d-9a8b-617f67678fb5 23862 0 2020-03-06 03:40:59 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar  6 03:40:59.896: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4117 /api/v1/namespaces/watch-4117/configmaps/e2e-watch-test-resource-version bb10553b-415c-4e6d-9a8b-617f67678fb5 23863 0 2020-03-06 03:40:59 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:40:59.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4117" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":160,"skipped":2932,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
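The watch test above records the resourceVersion returned by the first configmap update, starts a watch from that point, and expects to observe only the later MODIFIED (rv 23862) and DELETED (rv 23863) events. The underlying idea — resuming a history from a known resourceVersion — can be sketched like this (the event tuples are simplified stand-ins for real watch events; resourceVersions are opaque strings in the API, treated here as integers for illustration):

```python
def events_since(history, resource_version):
    """Return the (type, rv) events with rv strictly greater than resource_version.

    Within a single resource's history the versions are monotonically
    increasing, which is what lets a watch resume from a known point
    without replaying earlier changes.
    """
    return [(etype, rv) for etype, rv in history if rv > resource_version]

history = [
    ("ADDED", 23859),
    ("MODIFIED", 23860),   # first update -- the watch starts from here
    ("MODIFIED", 23862),
    ("DELETED", 23863),
]
replayed = events_since(history, 23860)
```

This mirrors why the log shows exactly two "Got :" lines: the creation and first modification precede the requested resourceVersion and are filtered out.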
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:40:59.903: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-8577
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar  6 03:41:00.036: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  6 03:41:00.044: INFO: Waiting for terminating namespaces to be deleted...
Mar  6 03:41:00.046: INFO: 
Logging pods the kubelet thinks is on node worker01 before test
Mar  6 03:41:00.053: INFO: metrics-server-78799bf646-xrsnn from kube-system started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:41:00.053: INFO: contour-54748c65f5-jl5wz from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:41:00.053: INFO: contour-certgen-82k46 from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:41:00.053: INFO: contour-54748c65f5-gk5sz from projectcontour started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:41:00.053: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:41:00.053: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:41:00.053: INFO: kube-proxy-kcb8f from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:41:00.053: INFO: kuard-678c676f5d-vsn86 from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:41:00.053: INFO: kube-flannel-ds-amd64-xxhz9 from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:41:00.053: INFO: kuard-678c676f5d-m29b6 from default started at 2020-03-06 02:30:49 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:41:00.053: INFO: kuard-678c676f5d-tzsnn from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:41:00.053: INFO: pod2 from services-1989 started at 2020-03-06 03:40:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container pause ready: false, restart count 0
Mar  6 03:41:00.053: INFO: envoy-lvmcb from projectcontour started at 2020-03-06 02:30:45 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.053: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:41:00.053: INFO: 
Logging pods the kubelet thinks is on node worker02 before test
Mar  6 03:41:00.057: INFO: kube-flannel-ds-amd64-ztfzf from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.057: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:41:00.057: INFO: envoy-wgz76 from projectcontour started at 2020-03-06 02:30:55 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.057: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:41:00.057: INFO: sonobuoy-e2e-job-67137ff64ac145d3 from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:41:00.058: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:41:00.058: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:41:00.058: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:41:00.058: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:41:00.058: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:41:00.058: INFO: kube-proxy-5xxdb from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.058: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:41:00.058: INFO: sonobuoy from sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.058: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:41:00.058: INFO: pod1 from services-1989 started at 2020-03-06 03:40:49 +0000 UTC (1 container statuses recorded)
Mar  6 03:41:00.058: INFO: 	Container pause ready: false, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bac35934-536c-46e6-9da2-d2712bd37a51 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-bac35934-536c-46e6-9da2-d2712bd37a51 off the node worker02
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bac35934-536c-46e6-9da2-d2712bd37a51
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:04.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8577" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":161,"skipped":2954,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
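The NodeSelector predicate exercised above (apply a random label to a node, relaunch the pod with a matching nodeSelector, then remove the label) reduces to a subset check: every key/value pair in the pod's nodeSelector must be present on the node. A hedged sketch of that check, using the label from this run for illustration:

```python
def node_matches_selector(node_labels, node_selector):
    """True when every key/value pair in the pod's nodeSelector exists on the node."""
    return all(node_labels.get(key) == value for key, value in node_selector.items())

label_key = "kubernetes.io/e2e-bac35934-536c-46e6-9da2-d2712bd37a51"
node_labels = {"kubernetes.io/hostname": "worker02", label_key: "42"}
selector = {label_key: "42"}

matches_before = node_matches_selector(node_labels, selector)  # label applied to node
del node_labels[label_key]                                     # label removed, as in AfterEach
matches_after = node_matches_selector(node_labels, selector)
```

The real scheduler predicate also handles nodeAffinity expressions; this only models the plain nodeSelector map the conformance test verifies.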
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:04.117: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-9943
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar  6 03:41:08.282: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar  6 03:41:08.284: INFO: Pod pod-with-poststart-exec-hook still exists
Mar  6 03:41:10.284: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar  6 03:41:10.287: INFO: Pod pod-with-poststart-exec-hook still exists
Mar  6 03:41:12.284: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar  6 03:41:12.287: INFO: Pod pod-with-poststart-exec-hook still exists
Mar  6 03:41:14.284: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar  6 03:41:14.287: INFO: Pod pod-with-poststart-exec-hook still exists
Mar  6 03:41:16.284: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar  6 03:41:16.287: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:16.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9943" for this suite.

• [SLOW TEST:12.177 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2982,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:16.295: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-2735
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:41:16.446: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Mar  6 03:41:21.448: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar  6 03:41:21.448: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar  6 03:41:23.468: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-2735 /apis/apps/v1/namespaces/deployment-2735/deployments/test-cleanup-deployment 594be617-14d8-4e98-a75e-bbf4a7a40d6b 24124 1 2020-03-06 03:41:21 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002913c18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-06 03:41:21 +0000 UTC,LastTransitionTime:2020-03-06 03:41:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-03-06 03:41:22 +0000 UTC,LastTransitionTime:2020-03-06 03:41:21 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Mar  6 03:41:23.471: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-2735 /apis/apps/v1/namespaces/deployment-2735/replicasets/test-cleanup-deployment-55ffc6b7b6 fd6f951c-7596-4f09-91b1-88d3afebffc8 24113 1 2020-03-06 03:41:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 594be617-14d8-4e98-a75e-bbf4a7a40d6b 0xc0018f9b97 0xc0018f9b98}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0018f9c08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:41:23.473: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-7wwsm" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-7wwsm test-cleanup-deployment-55ffc6b7b6- deployment-2735 /api/v1/namespaces/deployment-2735/pods/test-cleanup-deployment-55ffc6b7b6-7wwsm cc24241b-d0fe-4eb0-af4c-3b698355d2fb 24112 0 2020-03-06 03:41:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 fd6f951c-7596-4f09-91b1-88d3afebffc8 0xc0018f9f87 0xc0018f9f88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9znx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9znx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9znx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:41:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:41:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:41:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.197,StartTime:2020-03-06 03:41:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:41:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://aea2e0fd82590fdaf9a05c04ee65d9641ee4132c94a6189f64a0001980a6bcbb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:23.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2735" for this suite.

• [SLOW TEST:7.186 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":163,"skipped":3008,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:23.480: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1129
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Mar  6 03:41:23.611: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-780690759 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:23.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1129" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":164,"skipped":3013,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:23.674: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-5274
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:41:23.808: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:28.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5274" for this suite.

• [SLOW TEST:5.221 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    getting/updating/patching custom resource definition status sub-resource works  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":165,"skipped":3047,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:28.895: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-265
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:41:29.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-265'
Mar  6 03:41:29.308: INFO: stderr: ""
Mar  6 03:41:29.308: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Mar  6 03:41:29.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-265'
Mar  6 03:41:29.536: INFO: stderr: ""
Mar  6 03:41:29.536: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar  6 03:41:30.538: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 03:41:30.538: INFO: Found 1 / 1
Mar  6 03:41:30.538: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Mar  6 03:41:30.541: INFO: Selector matched 1 pods for map[app:agnhost]
Mar  6 03:41:30.541: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar  6 03:41:30.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 describe pod agnhost-master-l7q9f --namespace=kubectl-265'
Mar  6 03:41:30.622: INFO: stderr: ""
Mar  6 03:41:30.622: INFO: stdout: "Name:         agnhost-master-l7q9f\nNamespace:    kubectl-265\nPriority:     0\nNode:         worker02/192.168.1.251\nStart Time:   Fri, 06 Mar 2020 03:41:29 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.3.198\nIPs:\n  IP:           10.244.3.198\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://a93cc2c2a808d607d75cedbf134751dbecfc1966e5e2003514c9174b8264e571\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 06 Mar 2020 03:41:30 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h7l2r (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-h7l2r:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-h7l2r\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From               Message\n  ----    ------     ----       ----               -------\n  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned kubectl-265/agnhost-master-l7q9f to worker02\n  Normal  Pulled     1s         kubelet, worker02  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    0s         kubelet, worker02  Created container agnhost-master\n  Normal  Started    0s         kubelet, worker02  Started container agnhost-master\n"
Mar  6 03:41:30.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 describe rc agnhost-master --namespace=kubectl-265'
Mar  6 03:41:30.699: INFO: stderr: ""
Mar  6 03:41:30.699: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-265\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  1s    replication-controller  Created pod: agnhost-master-l7q9f\n"
Mar  6 03:41:30.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 describe service agnhost-master --namespace=kubectl-265'
Mar  6 03:41:30.769: INFO: stderr: ""
Mar  6 03:41:30.769: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-265\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.102.155.204\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.3.198:6379\nSession Affinity:  None\nEvents:            <none>\n"
Mar  6 03:41:30.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 describe node master01'
Mar  6 03:41:30.853: INFO: stderr: ""
Mar  6 03:41:30.853: INFO: stdout: "Name:               master01\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=master01\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"76:15:82:0d:8b:ab\"}\n                    flannel.alpha.coreos.com/backend-type: vxlan\n                    flannel.alpha.coreos.com/kube-subnet-manager: true\n                    flannel.alpha.coreos.com/public-ip: 192.168.1.247\n                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 06 Mar 2020 02:29:18 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  master01\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 06 Mar 2020 03:41:25 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 06 Mar 2020 03:39:04 +0000   Fri, 06 Mar 2020 02:29:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 06 Mar 2020 03:39:04 +0000   Fri, 06 Mar 2020 02:29:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 06 Mar 2020 03:39:04 +0000   Fri, 06 Mar 2020 02:29:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 06 Mar 2020 03:39:04 +0000   Fri, 06 Mar 2020 02:30:33 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  192.168.1.247\n  Hostname:    master01\nCapacity:\n  cpu:                2\n  ephemeral-storage:  41152812Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3733608Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  37926431477\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3631208Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 20200220105402131453637367482142\n  System UUID:                195205FE-EE72-4794-8EAA-AC554EFDEC9B\n  Boot ID:                    6a3bf627-7476-4f52-84fa-f3eab6d26427\n  Kernel Version:             3.10.0-1062.12.1.el7.x86_64\n  OS Image:                   CentOS Linux 7 (Core)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://19.3.5\n  Kubelet Version:            v1.17.3\n  Kube-Proxy Version:         v1.17.3\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-apiserver-master01                                    250m (12%)    0 (0%)      0 (0%)           0 (0%)         71m\n  kube-system                 kube-controller-manager-master01                           200m (10%)    0 (0%)      0 (0%)           0 (0%)         71m\n  kube-system                 kube-flannel-ds-amd64-6mbnb                                100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      71m\n  kube-system                 kube-proxy-4j8ft                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         71m\n  kube-system                 kube-scheduler-master01                                    100m (5%)     0 (0%)      0 (0%)           0 (0%)         71m\n  sonobuoy                    sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                650m (32%)  100m (5%)\n  memory             50Mi (1%)   50Mi (1%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Mar  6 03:41:30.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 describe namespace kubectl-265'
Mar  6 03:41:30.929: INFO: stderr: ""
Mar  6 03:41:30.929: INFO: stdout: "Name:         kubectl-265\nLabels:       e2e-framework=kubectl\n              e2e-run=e0336c13-b471-4627-93ef-421cefc2a866\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:30.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-265" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":166,"skipped":3098,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:30.936: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-2627
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:33.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2627" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":167,"skipped":3133,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:33.107: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-3389
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-c08cd7e2-9da6-4532-8660-a6ea671c6e93
STEP: Creating a pod to test consume configMaps
Mar  6 03:41:33.247: INFO: Waiting up to 5m0s for pod "pod-configmaps-11fe573d-70da-45d7-b0b8-3cc478c105a9" in namespace "configmap-3389" to be "success or failure"
Mar  6 03:41:33.249: INFO: Pod "pod-configmaps-11fe573d-70da-45d7-b0b8-3cc478c105a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045809ms
Mar  6 03:41:35.251: INFO: Pod "pod-configmaps-11fe573d-70da-45d7-b0b8-3cc478c105a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004675495s
STEP: Saw pod success
Mar  6 03:41:35.251: INFO: Pod "pod-configmaps-11fe573d-70da-45d7-b0b8-3cc478c105a9" satisfied condition "success or failure"
Mar  6 03:41:35.254: INFO: Trying to get logs from node worker02 pod pod-configmaps-11fe573d-70da-45d7-b0b8-3cc478c105a9 container configmap-volume-test: 
STEP: delete the pod
Mar  6 03:41:35.266: INFO: Waiting for pod pod-configmaps-11fe573d-70da-45d7-b0b8-3cc478c105a9 to disappear
Mar  6 03:41:35.268: INFO: Pod pod-configmaps-11fe573d-70da-45d7-b0b8-3cc478c105a9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:35.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3389" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":3161,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:35.275: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-9383
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Mar  6 03:41:35.413: INFO: Waiting up to 5m0s for pod "var-expansion-bb918db6-e692-4791-a8ff-2e9e6f5b903c" in namespace "var-expansion-9383" to be "success or failure"
Mar  6 03:41:35.415: INFO: Pod "var-expansion-bb918db6-e692-4791-a8ff-2e9e6f5b903c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074812ms
Mar  6 03:41:37.419: INFO: Pod "var-expansion-bb918db6-e692-4791-a8ff-2e9e6f5b903c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006107315s
STEP: Saw pod success
Mar  6 03:41:37.419: INFO: Pod "var-expansion-bb918db6-e692-4791-a8ff-2e9e6f5b903c" satisfied condition "success or failure"
Mar  6 03:41:37.422: INFO: Trying to get logs from node worker02 pod var-expansion-bb918db6-e692-4791-a8ff-2e9e6f5b903c container dapi-container: 
STEP: delete the pod
Mar  6 03:41:37.436: INFO: Waiting for pod var-expansion-bb918db6-e692-4791-a8ff-2e9e6f5b903c to disappear
Mar  6 03:41:37.440: INFO: Pod var-expansion-bb918db6-e692-4791-a8ff-2e9e6f5b903c no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:37.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9383" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":3192,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:37.448: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-5783
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar  6 03:41:37.596: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:37.596: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:37.596: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:37.598: INFO: Number of nodes with available pods: 0
Mar  6 03:41:37.598: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:41:38.601: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:38.601: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:38.601: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:38.606: INFO: Number of nodes with available pods: 0
Mar  6 03:41:38.606: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:41:39.601: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:39.601: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:39.601: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:39.604: INFO: Number of nodes with available pods: 2
Mar  6 03:41:39.604: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Mar  6 03:41:39.622: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:39.622: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:39.622: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:39.624: INFO: Number of nodes with available pods: 1
Mar  6 03:41:39.624: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:41:40.630: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:40.630: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:40.630: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:40.638: INFO: Number of nodes with available pods: 1
Mar  6 03:41:40.638: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:41:41.628: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:41.628: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:41.628: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:41.630: INFO: Number of nodes with available pods: 1
Mar  6 03:41:41.630: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:41:42.628: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:42.628: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:42.628: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:42.630: INFO: Number of nodes with available pods: 1
Mar  6 03:41:42.630: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:41:43.628: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:43.628: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:43.628: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:43.630: INFO: Number of nodes with available pods: 1
Mar  6 03:41:43.630: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:41:44.628: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:44.628: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:44.628: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:41:44.630: INFO: Number of nodes with available pods: 2
Mar  6 03:41:44.630: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5783, will wait for the garbage collector to delete the pods
Mar  6 03:41:44.690: INFO: Deleting DaemonSet.extensions daemon-set took: 6.362238ms
Mar  6 03:41:44.790: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.11671ms
Mar  6 03:41:55.393: INFO: Number of nodes with available pods: 0
Mar  6 03:41:55.393: INFO: Number of running nodes: 0, number of available pods: 0
Mar  6 03:41:55.395: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5783/daemonsets","resourceVersion":"24480"},"items":null}

Mar  6 03:41:55.396: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5783/pods","resourceVersion":"24480"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:55.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5783" for this suite.

• [SLOW TEST:17.967 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":170,"skipped":3222,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:55.415: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9115
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:41:55.553: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04a608f1-b6d8-474b-8669-b208b3426aa5" in namespace "downward-api-9115" to be "success or failure"
Mar  6 03:41:55.556: INFO: Pod "downwardapi-volume-04a608f1-b6d8-474b-8669-b208b3426aa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.829496ms
Mar  6 03:41:57.558: INFO: Pod "downwardapi-volume-04a608f1-b6d8-474b-8669-b208b3426aa5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005268493s
STEP: Saw pod success
Mar  6 03:41:57.558: INFO: Pod "downwardapi-volume-04a608f1-b6d8-474b-8669-b208b3426aa5" satisfied condition "success or failure"
Mar  6 03:41:57.561: INFO: Trying to get logs from node worker02 pod downwardapi-volume-04a608f1-b6d8-474b-8669-b208b3426aa5 container client-container: 
STEP: delete the pod
Mar  6 03:41:57.575: INFO: Waiting for pod downwardapi-volume-04a608f1-b6d8-474b-8669-b208b3426aa5 to disappear
Mar  6 03:41:57.579: INFO: Pod downwardapi-volume-04a608f1-b6d8-474b-8669-b208b3426aa5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:57.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9115" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":3282,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:57.586: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-6267
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:41:57.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6267" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":172,"skipped":3309,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:41:57.730: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-6478
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:41:57.930: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:03.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6478" for this suite.

• [SLOW TEST:6.228 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":173,"skipped":3328,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:03.958: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-7137
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar  6 03:42:04.092: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  6 03:42:04.099: INFO: Waiting for terminating namespaces to be deleted...
Mar  6 03:42:04.101: INFO: 
Logging pods the kubelet thinks are on node worker01 before test
Mar  6 03:42:04.106: INFO: envoy-lvmcb from projectcontour started at 2020-03-06 02:30:45 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:42:04.106: INFO: kuard-678c676f5d-m29b6 from default started at 2020-03-06 02:30:49 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:42:04.106: INFO: kuard-678c676f5d-tzsnn from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:42:04.106: INFO: contour-54748c65f5-jl5wz from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:42:04.106: INFO: metrics-server-78799bf646-xrsnn from kube-system started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:42:04.106: INFO: kube-proxy-kcb8f from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:42:04.106: INFO: contour-certgen-82k46 from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:42:04.106: INFO: contour-54748c65f5-gk5sz from projectcontour started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:42:04.106: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:42:04.106: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:42:04.106: INFO: kube-flannel-ds-amd64-xxhz9 from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:42:04.106: INFO: kuard-678c676f5d-vsn86 from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.106: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:42:04.106: INFO: 
Logging pods the kubelet thinks are on node worker02 before test
Mar  6 03:42:04.110: INFO: kube-proxy-5xxdb from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.110: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:42:04.110: INFO: sonobuoy from sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.110: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:42:04.110: INFO: kube-flannel-ds-amd64-ztfzf from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.110: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:42:04.110: INFO: envoy-wgz76 from projectcontour started at 2020-03-06 02:30:55 +0000 UTC (1 container statuses recorded)
Mar  6 03:42:04.110: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:42:04.110: INFO: sonobuoy-e2e-job-67137ff64ac145d3 from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:42:04.110: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:42:04.110: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:42:04.110: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:42:04.110: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:42:04.110: INFO: 	Container systemd-logs ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-96bd5083-a413-4a96-9768-95c82c709b79 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-96bd5083-a413-4a96-9768-95c82c709b79 off the node worker02
STEP: verifying the node doesn't have the label kubernetes.io/e2e-96bd5083-a413-4a96-9768-95c82c709b79
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:12.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7137" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:8.236 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":174,"skipped":3337,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
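The scheduler test above turns on hostPort conflicts being keyed by the (hostIP, hostPort, protocol) tuple rather than hostPort alone: pod1 (127.0.0.1:54321/TCP), pod2 (127.0.0.2:54321/TCP), and pod3 (127.0.0.2:54321/UDP) all schedule onto the same node. A minimal sketch of two such pods follows; the pod names and `containerPort` value are illustrative, not taken from the test, while the hostPort/hostIP values come from the STEP lines above.

```yaml
# Sketch: two pods sharing hostPort 54321 on one node. They do not
# conflict because their port mappings differ in hostIP; a third pod
# differing only in protocol (UDP) would likewise co-schedule.
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod1            # illustrative name
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080        # illustrative container port
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod2            # illustrative name
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54321            # same port as pod1...
      hostIP: 127.0.0.2          # ...but a different hostIP, so no conflict
      protocol: TCP
```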
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:12.195: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-4595
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Mar  6 03:42:12.339: INFO: Created pod &Pod{ObjectMeta:{dns-4595  dns-4595 /api/v1/namespaces/dns-4595/pods/dns-4595 dea69b10-7f52-4af4-b032-e341f4059cc7 24675 0 2020-03-06 03:42:12 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ctmvh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ctmvh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ctmvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Mar  6 03:42:14.343: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4595 PodName:dns-4595 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:42:14.343: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Verifying customized DNS server is configured on pod...
Mar  6 03:42:14.451: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4595 PodName:dns-4595 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:42:14.451: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:42:14.604: INFO: Deleting pod dns-4595...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:14.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4595" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":175,"skipped":3348,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
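The DNS test's logged PodSpec shows `DNSPolicy:None` with `Nameservers:[1.1.1.1]` and `Searches:[resolv.conf.local]`. Reconstructed as a manifest, that spec looks roughly like the sketch below (namespace and labels omitted); with `dnsPolicy: "None"`, the `dnsConfig` block becomes the sole source of the container's `/etc/resolv.conf`, which is what the `dns-server-list` and `dns-suffix` checks verify.

```yaml
# Sketch of the pod implied by the log above: a custom resolver and
# search domain replace (rather than extend) the cluster DNS settings.
apiVersion: v1
kind: Pod
metadata:
  name: dns-4595                 # pod name from the log
spec:
  dnsPolicy: "None"              # ignore cluster/node DNS entirely
  dnsConfig:
    nameservers:
    - 1.1.1.1                    # value observed in the logged PodSpec
    searches:
    - resolv.conf.local
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
```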
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:14.631: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-5332
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Mar  6 03:42:14.769: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar  6 03:42:21.792: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:21.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5332" for this suite.

• [SLOW TEST:7.169 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":3350,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:21.800: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-2724
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:42:21.948: INFO: Waiting up to 5m0s for pod "downwardapi-volume-61855b7b-a034-4c16-8c6c-fb920cfd7148" in namespace "downward-api-2724" to be "success or failure"
Mar  6 03:42:21.950: INFO: Pod "downwardapi-volume-61855b7b-a034-4c16-8c6c-fb920cfd7148": Phase="Pending", Reason="", readiness=false. Elapsed: 1.948277ms
Mar  6 03:42:23.952: INFO: Pod "downwardapi-volume-61855b7b-a034-4c16-8c6c-fb920cfd7148": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004336962s
STEP: Saw pod success
Mar  6 03:42:23.952: INFO: Pod "downwardapi-volume-61855b7b-a034-4c16-8c6c-fb920cfd7148" satisfied condition "success or failure"
Mar  6 03:42:23.954: INFO: Trying to get logs from node worker02 pod downwardapi-volume-61855b7b-a034-4c16-8c6c-fb920cfd7148 container client-container: 
STEP: delete the pod
Mar  6 03:42:23.968: INFO: Waiting for pod downwardapi-volume-61855b7b-a034-4c16-8c6c-fb920cfd7148 to disappear
Mar  6 03:42:23.970: INFO: Pod downwardapi-volume-61855b7b-a034-4c16-8c6c-fb920cfd7148 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:23.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2724" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":3364,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
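The downward API test above asserts that when a container sets no `resources.limits.memory`, the value exposed through a downward API volume defaults to the node's allocatable memory. A sketch of such a pod, assuming the standard `resourceFieldRef` mechanism (the pod name, mount path, and file name here are illustrative; the container name `client-container` appears in the log):

```yaml
# Sketch: expose the container's effective memory limit via a
# downwardAPI volume. With no limits.memory set, the kubelet reports
# node allocatable memory instead, which the test reads and verifies.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container           # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo        # illustrative mount path
    # note: no resources.limits.memory, so the default applies
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit           # illustrative file name
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```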
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:23.977: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9712
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar  6 03:42:24.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9712'
Mar  6 03:42:24.181: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar  6 03:42:24.181: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Mar  6 03:42:24.192: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Mar  6 03:42:24.197: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Mar  6 03:42:24.215: INFO: scanned /root for discovery docs: 
Mar  6 03:42:24.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9712'
Mar  6 03:42:39.957: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar  6 03:42:39.957: INFO: stdout: "Created e2e-test-httpd-rc-42c291e225641f6072a939e6268069de\nScaling up e2e-test-httpd-rc-42c291e225641f6072a939e6268069de from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-42c291e225641f6072a939e6268069de up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-42c291e225641f6072a939e6268069de to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Mar  6 03:42:39.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9712'
Mar  6 03:42:40.022: INFO: stderr: ""
Mar  6 03:42:40.022: INFO: stdout: "e2e-test-httpd-rc-42c291e225641f6072a939e6268069de-tp8nz "
Mar  6 03:42:40.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods e2e-test-httpd-rc-42c291e225641f6072a939e6268069de-tp8nz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9712'
Mar  6 03:42:40.086: INFO: stderr: ""
Mar  6 03:42:40.086: INFO: stdout: "true"
Mar  6 03:42:40.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods e2e-test-httpd-rc-42c291e225641f6072a939e6268069de-tp8nz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9712'
Mar  6 03:42:40.156: INFO: stderr: ""
Mar  6 03:42:40.156: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Mar  6 03:42:40.156: INFO: e2e-test-httpd-rc-42c291e225641f6072a939e6268069de-tp8nz is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698
Mar  6 03:42:40.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete rc e2e-test-httpd-rc --namespace=kubectl-9712'
Mar  6 03:42:40.227: INFO: stderr: ""
Mar  6 03:42:40.227: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:40.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9712" for this suite.

• [SLOW TEST:16.259 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":178,"skipped":3380,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:40.236: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename namespaces
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-443
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-9294
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3734
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:53.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-443" for this suite.
STEP: Destroying namespace "nsdeletetest-9294" for this suite.
Mar  6 03:42:53.656: INFO: Namespace nsdeletetest-9294 was already deleted
STEP: Destroying namespace "nsdeletetest-3734" for this suite.

• [SLOW TEST:13.423 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":179,"skipped":3396,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:53.660: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-2400
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar  6 03:42:53.792: INFO: Waiting up to 5m0s for pod "pod-2485caff-9ac9-4b63-8d7b-86840828fdf2" in namespace "emptydir-2400" to be "success or failure"
Mar  6 03:42:53.794: INFO: Pod "pod-2485caff-9ac9-4b63-8d7b-86840828fdf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166112ms
Mar  6 03:42:55.797: INFO: Pod "pod-2485caff-9ac9-4b63-8d7b-86840828fdf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004743563s
STEP: Saw pod success
Mar  6 03:42:55.797: INFO: Pod "pod-2485caff-9ac9-4b63-8d7b-86840828fdf2" satisfied condition "success or failure"
Mar  6 03:42:55.799: INFO: Trying to get logs from node worker02 pod pod-2485caff-9ac9-4b63-8d7b-86840828fdf2 container test-container: 
STEP: delete the pod
Mar  6 03:42:55.818: INFO: Waiting for pod pod-2485caff-9ac9-4b63-8d7b-86840828fdf2 to disappear
Mar  6 03:42:55.819: INFO: Pod pod-2485caff-9ac9-4b63-8d7b-86840828fdf2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:55.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2400" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3398,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:55.826: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-2834
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Mar  6 03:42:55.964: INFO: Waiting up to 5m0s for pod "client-containers-eefde9b0-bb20-42dc-b912-0578a7399925" in namespace "containers-2834" to be "success or failure"
Mar  6 03:42:55.967: INFO: Pod "client-containers-eefde9b0-bb20-42dc-b912-0578a7399925": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125729ms
Mar  6 03:42:57.969: INFO: Pod "client-containers-eefde9b0-bb20-42dc-b912-0578a7399925": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004577401s
STEP: Saw pod success
Mar  6 03:42:57.969: INFO: Pod "client-containers-eefde9b0-bb20-42dc-b912-0578a7399925" satisfied condition "success or failure"
Mar  6 03:42:57.971: INFO: Trying to get logs from node worker02 pod client-containers-eefde9b0-bb20-42dc-b912-0578a7399925 container test-container: 
STEP: delete the pod
Mar  6 03:42:57.984: INFO: Waiting for pod client-containers-eefde9b0-bb20-42dc-b912-0578a7399925 to disappear
Mar  6 03:42:57.986: INFO: Pod client-containers-eefde9b0-bb20-42dc-b912-0578a7399925 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:42:57.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2834" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":3406,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:42:57.995: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-6546
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl logs
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464
STEP: creating a pod
Mar  6 03:42:58.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6546 -- logs-generator --log-lines-total 100 --run-duration 20s'
Mar  6 03:42:58.216: INFO: stderr: ""
Mar  6 03:42:58.216: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Mar  6 03:42:58.216: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Mar  6 03:42:58.216: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6546" to be "running and ready, or succeeded"
Mar  6 03:42:58.220: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27369ms
Mar  6 03:43:00.222: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.006717839s
Mar  6 03:43:00.222: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Mar  6 03:43:00.222: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Mar  6 03:43:00.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs logs-generator logs-generator --namespace=kubectl-6546'
Mar  6 03:43:00.293: INFO: stderr: ""
Mar  6 03:43:00.293: INFO: stdout: "I0306 03:42:58.997654       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/nk5q 271\nI0306 03:42:59.197747       1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/4c9t 305\nI0306 03:42:59.397807       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/hwjv 443\nI0306 03:42:59.597790       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/6gj 300\nI0306 03:42:59.797748       1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/r2xh 453\nI0306 03:42:59.997742       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/xc2h 251\nI0306 03:43:00.197755       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/qg8 258\n"
STEP: limiting log lines
Mar  6 03:43:00.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs logs-generator logs-generator --namespace=kubectl-6546 --tail=1'
Mar  6 03:43:00.370: INFO: stderr: ""
Mar  6 03:43:00.370: INFO: stdout: "I0306 03:43:00.197755       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/qg8 258\n"
Mar  6 03:43:00.370: INFO: got output "I0306 03:43:00.197755       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/qg8 258\n"
STEP: limiting log bytes
Mar  6 03:43:00.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs logs-generator logs-generator --namespace=kubectl-6546 --limit-bytes=1'
Mar  6 03:43:00.441: INFO: stderr: ""
Mar  6 03:43:00.441: INFO: stdout: "I"
Mar  6 03:43:00.441: INFO: got output "I"
STEP: exposing timestamps
Mar  6 03:43:00.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs logs-generator logs-generator --namespace=kubectl-6546 --tail=1 --timestamps'
Mar  6 03:43:00.509: INFO: stderr: ""
Mar  6 03:43:00.509: INFO: stdout: "2020-03-06T03:43:00.399045165Z I0306 03:43:00.397767       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/vkc 396\n"
Mar  6 03:43:00.509: INFO: got output "2020-03-06T03:43:00.399045165Z I0306 03:43:00.397767       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/vkc 396\n"
STEP: restricting to a time range
Mar  6 03:43:03.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs logs-generator logs-generator --namespace=kubectl-6546 --since=1s'
Mar  6 03:43:03.079: INFO: stderr: ""
Mar  6 03:43:03.079: INFO: stdout: "I0306 03:43:02.197755       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/nsf 366\nI0306 03:43:02.397753       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/vcs 537\nI0306 03:43:02.597754       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/n7n 365\nI0306 03:43:02.797750       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/jdz 347\nI0306 03:43:02.997780       1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/hnj 489\n"
Mar  6 03:43:03.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs logs-generator logs-generator --namespace=kubectl-6546 --since=24h'
Mar  6 03:43:03.161: INFO: stderr: ""
Mar  6 03:43:03.161: INFO: stdout: "I0306 03:42:58.997654       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/nk5q 271\nI0306 03:42:59.197747       1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/4c9t 305\nI0306 03:42:59.397807       1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/hwjv 443\nI0306 03:42:59.597790       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/6gj 300\nI0306 03:42:59.797748       1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/r2xh 453\nI0306 03:42:59.997742       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/xc2h 251\nI0306 03:43:00.197755       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/qg8 258\nI0306 03:43:00.397767       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/vkc 396\nI0306 03:43:00.597750       1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/mtl 459\nI0306 03:43:00.797764       1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/2x5k 448\nI0306 03:43:00.997754       1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/khlh 579\nI0306 03:43:01.197755       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/c9gs 305\nI0306 03:43:01.397751       1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/sxz 308\nI0306 03:43:01.597785       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/nmhh 334\nI0306 03:43:01.797743       1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/4zzw 557\nI0306 03:43:01.997757       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/dv2b 485\nI0306 03:43:02.197755       1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/nsf 366\nI0306 03:43:02.397753       1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/vcs 537\nI0306 03:43:02.597754       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/n7n 365\nI0306 03:43:02.797750       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/jdz 347\nI0306 03:43:02.997780       1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/hnj 489\n"
[AfterEach] Kubectl logs
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470
Mar  6 03:43:03.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete pod logs-generator --namespace=kubectl-6546'
Mar  6 03:43:15.152: INFO: stderr: ""
Mar  6 03:43:15.152: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:43:15.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6546" for this suite.

• [SLOW TEST:17.163 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":182,"skipped":3446,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
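Editor's note: the `--tail=1` and `--limit-bytes=1` results in the test above (exactly one trailing log line, then the single byte "I") correspond to plain stream slicing: the last N lines and the first M bytes of the log stream. A minimal local sketch, using a hypothetical file /tmp/fake.log in place of the pod's log stream:

```shell
#!/bin/sh
# Stand-in for a pod's log stream (hypothetical file, not from the cluster).
printf 'line 1\nline 2\nline 3\n' > /tmp/fake.log

# Equivalent of `kubectl logs --tail=1`: keep only the last line.
tail -n 1 /tmp/fake.log    # prints: line 3

# Equivalent of `kubectl logs --limit-bytes=1`: keep only the first byte.
head -c 1 /tmp/fake.log    # prints: l
```

This is why the test saw a single `logs_generator.go` entry with `--tail=1` and the lone character "I" with `--limit-bytes=1`.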
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:43:15.159: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9798
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382
STEP: creating the pod
Mar  6 03:43:15.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-9798'
Mar  6 03:43:15.479: INFO: stderr: ""
Mar  6 03:43:15.479: INFO: stdout: "pod/pause created\n"
Mar  6 03:43:15.479: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Mar  6 03:43:15.479: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9798" to be "running and ready"
Mar  6 03:43:15.482: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589344ms
Mar  6 03:43:17.484: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.004637159s
Mar  6 03:43:17.484: INFO: Pod "pause" satisfied condition "running and ready"
Mar  6 03:43:17.484: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Mar  6 03:43:17.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 label pods pause testing-label=testing-label-value --namespace=kubectl-9798'
Mar  6 03:43:17.554: INFO: stderr: ""
Mar  6 03:43:17.554: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar  6 03:43:17.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pod pause -L testing-label --namespace=kubectl-9798'
Mar  6 03:43:17.631: INFO: stderr: ""
Mar  6 03:43:17.631: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          2s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Mar  6 03:43:17.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 label pods pause testing-label- --namespace=kubectl-9798'
Mar  6 03:43:17.699: INFO: stderr: ""
Mar  6 03:43:17.699: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Mar  6 03:43:17.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pod pause -L testing-label --namespace=kubectl-9798'
Mar  6 03:43:17.766: INFO: stderr: ""
Mar  6 03:43:17.766: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          2s    \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389
STEP: using delete to clean up resources
Mar  6 03:43:17.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-9798'
Mar  6 03:43:17.834: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:43:17.834: INFO: stdout: "pod \"pause\" force deleted\n"
Mar  6 03:43:17.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get rc,svc -l name=pause --no-headers --namespace=kubectl-9798'
Mar  6 03:43:17.936: INFO: stderr: "No resources found in kubectl-9798 namespace.\n"
Mar  6 03:43:17.936: INFO: stdout: ""
Mar  6 03:43:17.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -l name=pause --namespace=kubectl-9798 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar  6 03:43:18.020: INFO: stderr: ""
Mar  6 03:43:18.020: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:43:18.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9798" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":183,"skipped":3452,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
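Editor's note: the label test above exercises kubectl's label syntax, including the trailing-dash form for removal. A sketch of the same sequence, assuming a reachable cluster and a running pod named "pause" in namespace "kubectl-9798" (names taken from the log; not runnable without that cluster):

```shell
# Add a label to the pod.
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-9798

# Show the label as an extra column (-L adds a TESTING-LABEL column).
kubectl get pod pause -L testing-label --namespace=kubectl-9798

# A trailing dash after the key removes the label.
kubectl label pods pause testing-label- --namespace=kubectl-9798
```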

------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:43:18.029: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3416
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:43:18.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3416" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":184,"skipped":3452,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:43:18.165: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-4289
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar  6 03:43:20.325: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:43:20.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4289" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3458,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:43:20.343: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-9629
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Update Demo
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Mar  6 03:43:20.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-9629'
Mar  6 03:43:20.618: INFO: stderr: ""
Mar  6 03:43:20.618: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar  6 03:43:20.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9629'
Mar  6 03:43:20.690: INFO: stderr: ""
Mar  6 03:43:20.690: INFO: stdout: "update-demo-nautilus-kh7l8 update-demo-nautilus-pvr4w "
Mar  6 03:43:20.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-kh7l8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:20.752: INFO: stderr: ""
Mar  6 03:43:20.752: INFO: stdout: ""
Mar  6 03:43:20.752: INFO: update-demo-nautilus-kh7l8 is created but not running
Mar  6 03:43:25.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9629'
Mar  6 03:43:25.817: INFO: stderr: ""
Mar  6 03:43:25.817: INFO: stdout: "update-demo-nautilus-kh7l8 update-demo-nautilus-pvr4w "
Mar  6 03:43:25.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-kh7l8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:25.878: INFO: stderr: ""
Mar  6 03:43:25.878: INFO: stdout: "true"
Mar  6 03:43:25.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-kh7l8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:25.940: INFO: stderr: ""
Mar  6 03:43:25.940: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:43:25.940: INFO: validating pod update-demo-nautilus-kh7l8
Mar  6 03:43:25.944: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:43:25.944: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:43:25.944: INFO: update-demo-nautilus-kh7l8 is verified up and running
Mar  6 03:43:25.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-pvr4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:26.015: INFO: stderr: ""
Mar  6 03:43:26.015: INFO: stdout: "true"
Mar  6 03:43:26.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-pvr4w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:26.076: INFO: stderr: ""
Mar  6 03:43:26.076: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:43:26.076: INFO: validating pod update-demo-nautilus-pvr4w
Mar  6 03:43:26.080: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:43:26.080: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:43:26.080: INFO: update-demo-nautilus-pvr4w is verified up and running
STEP: scaling down the replication controller
Mar  6 03:43:26.081: INFO: scanned /root for discovery docs: 
Mar  6 03:43:26.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9629'
Mar  6 03:43:27.168: INFO: stderr: ""
Mar  6 03:43:27.168: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar  6 03:43:27.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9629'
Mar  6 03:43:27.232: INFO: stderr: ""
Mar  6 03:43:27.232: INFO: stdout: "update-demo-nautilus-kh7l8 update-demo-nautilus-pvr4w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar  6 03:43:32.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9629'
Mar  6 03:43:32.298: INFO: stderr: ""
Mar  6 03:43:32.298: INFO: stdout: "update-demo-nautilus-kh7l8 "
Mar  6 03:43:32.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-kh7l8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:32.359: INFO: stderr: ""
Mar  6 03:43:32.359: INFO: stdout: "true"
Mar  6 03:43:32.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-kh7l8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:32.420: INFO: stderr: ""
Mar  6 03:43:32.420: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:43:32.420: INFO: validating pod update-demo-nautilus-kh7l8
Mar  6 03:43:32.423: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:43:32.423: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:43:32.423: INFO: update-demo-nautilus-kh7l8 is verified up and running
STEP: scaling up the replication controller
Mar  6 03:43:32.424: INFO: scanned /root for discovery docs: 
Mar  6 03:43:32.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9629'
Mar  6 03:43:33.507: INFO: stderr: ""
Mar  6 03:43:33.507: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar  6 03:43:33.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9629'
Mar  6 03:43:33.572: INFO: stderr: ""
Mar  6 03:43:33.572: INFO: stdout: "update-demo-nautilus-hj7wq update-demo-nautilus-kh7l8 "
Mar  6 03:43:33.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-hj7wq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:33.642: INFO: stderr: ""
Mar  6 03:43:33.642: INFO: stdout: ""
Mar  6 03:43:33.642: INFO: update-demo-nautilus-hj7wq is created but not running
Mar  6 03:43:38.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9629'
Mar  6 03:43:38.706: INFO: stderr: ""
Mar  6 03:43:38.706: INFO: stdout: "update-demo-nautilus-hj7wq update-demo-nautilus-kh7l8 "
Mar  6 03:43:38.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-hj7wq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:38.770: INFO: stderr: ""
Mar  6 03:43:38.770: INFO: stdout: "true"
Mar  6 03:43:38.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-hj7wq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:38.831: INFO: stderr: ""
Mar  6 03:43:38.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:43:38.831: INFO: validating pod update-demo-nautilus-hj7wq
Mar  6 03:43:38.835: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:43:38.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:43:38.835: INFO: update-demo-nautilus-hj7wq is verified up and running
Mar  6 03:43:38.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-kh7l8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:38.896: INFO: stderr: ""
Mar  6 03:43:38.896: INFO: stdout: "true"
Mar  6 03:43:38.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-kh7l8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9629'
Mar  6 03:43:38.958: INFO: stderr: ""
Mar  6 03:43:38.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:43:38.958: INFO: validating pod update-demo-nautilus-kh7l8
Mar  6 03:43:38.960: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:43:38.960: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:43:38.960: INFO: update-demo-nautilus-kh7l8 is verified up and running
STEP: using delete to clean up resources
Mar  6 03:43:38.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-9629'
Mar  6 03:43:39.027: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:43:39.027: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar  6 03:43:39.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9629'
Mar  6 03:43:39.096: INFO: stderr: "No resources found in kubectl-9629 namespace.\n"
Mar  6 03:43:39.096: INFO: stdout: ""
Mar  6 03:43:39.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -l name=update-demo --namespace=kubectl-9629 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar  6 03:43:39.163: INFO: stderr: ""
Mar  6 03:43:39.163: INFO: stdout: "update-demo-nautilus-hj7wq\nupdate-demo-nautilus-kh7l8\n"
Mar  6 03:43:39.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9629'
Mar  6 03:43:39.762: INFO: stderr: "No resources found in kubectl-9629 namespace.\n"
Mar  6 03:43:39.762: INFO: stdout: ""
Mar  6 03:43:39.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -l name=update-demo --namespace=kubectl-9629 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar  6 03:43:39.841: INFO: stderr: ""
Mar  6 03:43:39.841: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:43:39.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9629" for this suite.

• [SLOW TEST:19.505 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":186,"skipped":3463,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:43:39.849: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-1018
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar  6 03:43:39.995: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:39.995: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:39.995: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:39.999: INFO: Number of nodes with available pods: 0
Mar  6 03:43:39.999: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:43:41.002: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:41.002: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:41.002: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:41.006: INFO: Number of nodes with available pods: 0
Mar  6 03:43:41.006: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:43:42.004: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:42.004: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:42.004: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:42.008: INFO: Number of nodes with available pods: 2
Mar  6 03:43:42.008: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar  6 03:43:42.027: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:42.027: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:42.027: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:43:42.032: INFO: Number of nodes with available pods: 2
Mar  6 03:43:42.032: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1018, will wait for the garbage collector to delete the pods
Mar  6 03:43:43.096: INFO: Deleting DaemonSet.extensions daemon-set took: 5.251024ms
Mar  6 03:43:43.196: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.111786ms
Mar  6 03:43:55.399: INFO: Number of nodes with available pods: 0
Mar  6 03:43:55.399: INFO: Number of running nodes: 0, number of available pods: 0
Mar  6 03:43:55.403: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1018/daemonsets","resourceVersion":"25545"},"items":null}

Mar  6 03:43:55.405: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1018/pods","resourceVersion":"25545"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:43:55.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1018" for this suite.

• [SLOW TEST:15.577 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":187,"skipped":3470,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
S
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:43:55.425: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-8715
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
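The ordering property this test verifies can be sketched outside the cluster: every watch started from some resource version must see the subsequent events in the same global order. A minimal simulation (not the e2e framework's actual code; the event log and field names are illustrative):

```python
# Each watcher starting at resource version N must observe events
# N+1, N+2, ... in the same order as every other watcher.
def events_from(log, start_rv):
    """Replay events with resourceVersion greater than start_rv, in order."""
    return [e for e in log if e["rv"] > start_rv]

# Simulated event stream produced by the background goroutine.
log = [{"rv": rv, "name": f"event-{rv}"} for rv in range(1, 6)]

# One watcher per starting resource version, as in the test.
watchers = {rv: events_from(log, rv) for rv in range(0, 5)}

# Each watcher's view must be a suffix of the full ordered stream.
full = [e["rv"] for e in log]
for start, seen in watchers.items():
    assert [e["rv"] for e in seen] == full[start:]
```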
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:43:59.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8715" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":188,"skipped":3471,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:43:59.731: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-3867
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-3867
STEP: creating replication controller nodeport-test in namespace services-3867
I0306 03:43:59.885271      19 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3867, replica count: 2
I0306 03:44:02.935515      19 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar  6 03:44:02.935: INFO: Creating new exec pod
Mar  6 03:44:05.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-3867 execpodx5p9g -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Mar  6 03:44:06.125: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Mar  6 03:44:06.125: INFO: stdout: ""
Mar  6 03:44:06.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-3867 execpodx5p9g -- /bin/sh -x -c nc -zv -t -w 2 10.102.191.58 80'
Mar  6 03:44:06.344: INFO: stderr: "+ nc -zv -t -w 2 10.102.191.58 80\nConnection to 10.102.191.58 80 port [tcp/http] succeeded!\n"
Mar  6 03:44:06.344: INFO: stdout: ""
Mar  6 03:44:06.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-3867 execpodx5p9g -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.250 30493'
Mar  6 03:44:06.541: INFO: stderr: "+ nc -zv -t -w 2 192.168.1.250 30493\nConnection to 192.168.1.250 30493 port [tcp/30493] succeeded!\n"
Mar  6 03:44:06.541: INFO: stdout: ""
Mar  6 03:44:06.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-3867 execpodx5p9g -- /bin/sh -x -c nc -zv -t -w 2 192.168.1.251 30493'
Mar  6 03:44:06.758: INFO: stderr: "+ nc -zv -t -w 2 192.168.1.251 30493\nConnection to 192.168.1.251 30493 port [tcp/30493] succeeded!\n"
Mar  6 03:44:06.758: INFO: stdout: ""
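The four `nc -zv -t -w 2` invocations above probe TCP reachability of the service name, its ClusterIP, and the NodePort on each node. A rough Python equivalent of that check (the cluster addresses in the log, e.g. 192.168.1.250:30493, are environment-specific, so this demo listens locally instead):

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Rough equivalent of `nc -zv -t -w 2 <host> <port>`: attempt a
    TCP connect within the timeout and report success, sending no data."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo against an ephemeral local listener rather than a
# real NodePort.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
assert tcp_reachable("127.0.0.1", port)
srv.close()
```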
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:44:06.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3867" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:7.035 seconds]
[sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":189,"skipped":3481,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:44:06.766: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-lifecycle-hook-3417
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar  6 03:44:10.930: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:10.933: INFO: Pod pod-with-poststart-http-hook still exists
Mar  6 03:44:12.933: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:12.936: INFO: Pod pod-with-poststart-http-hook still exists
Mar  6 03:44:14.933: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:14.936: INFO: Pod pod-with-poststart-http-hook still exists
Mar  6 03:44:16.933: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:16.936: INFO: Pod pod-with-poststart-http-hook still exists
Mar  6 03:44:18.933: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:18.936: INFO: Pod pod-with-poststart-http-hook still exists
Mar  6 03:44:20.933: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:20.942: INFO: Pod pod-with-poststart-http-hook still exists
Mar  6 03:44:22.933: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:22.936: INFO: Pod pod-with-poststart-http-hook still exists
Mar  6 03:44:24.933: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:24.936: INFO: Pod pod-with-poststart-http-hook still exists
Mar  6 03:44:26.933: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar  6 03:44:26.935: INFO: Pod pod-with-poststart-http-hook no longer exists
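The "still exists" lines above are the framework polling on a 2-second interval until the pod is gone. A hedged sketch of that poll-until pattern (function name and timeout are illustrative, not the framework's API):

```python
import time

def wait_for(predicate, timeout=60.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` every `interval` seconds until it returns True
    or `timeout` elapses; mirrors the 2-second 'Waiting for pod ... to
    disappear' loop in the log."""
    deadline = clock() + timeout
    while True:
        if predicate():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# Demo: a pod that "disappears" on the third check (sleep is stubbed
# out so the demo runs instantly).
checks = iter([False, False, True])
assert wait_for(lambda: next(checks), timeout=10, interval=0,
                sleep=lambda s: None)
```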
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:44:26.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3417" for this suite.

• [SLOW TEST:20.176 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3489,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:44:26.942: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-1167
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar  6 03:44:27.077: INFO: Waiting up to 5m0s for pod "downward-api-488a4d42-3d88-45b6-8b8b-3d073c98596d" in namespace "downward-api-1167" to be "success or failure"
Mar  6 03:44:27.082: INFO: Pod "downward-api-488a4d42-3d88-45b6-8b8b-3d073c98596d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317392ms
Mar  6 03:44:29.084: INFO: Pod "downward-api-488a4d42-3d88-45b6-8b8b-3d073c98596d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006981332s
STEP: Saw pod success
Mar  6 03:44:29.084: INFO: Pod "downward-api-488a4d42-3d88-45b6-8b8b-3d073c98596d" satisfied condition "success or failure"
Mar  6 03:44:29.087: INFO: Trying to get logs from node worker02 pod downward-api-488a4d42-3d88-45b6-8b8b-3d073c98596d container dapi-container: 
STEP: delete the pod
Mar  6 03:44:29.106: INFO: Waiting for pod downward-api-488a4d42-3d88-45b6-8b8b-3d073c98596d to disappear
Mar  6 03:44:29.109: INFO: Pod downward-api-488a4d42-3d88-45b6-8b8b-3d073c98596d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:44:29.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1167" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3499,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:44:29.119: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-7006
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:44:37.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7006" for this suite.

• [SLOW TEST:8.141 seconds]
[sig-apps] Job
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":192,"skipped":3516,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:44:37.261: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-545
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:44:37.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3cad3b96-ab0f-4533-89a3-15afe72c7789" in namespace "projected-545" to be "success or failure"
Mar  6 03:44:37.401: INFO: Pod "downwardapi-volume-3cad3b96-ab0f-4533-89a3-15afe72c7789": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021293ms
Mar  6 03:44:39.406: INFO: Pod "downwardapi-volume-3cad3b96-ab0f-4533-89a3-15afe72c7789": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007269618s
STEP: Saw pod success
Mar  6 03:44:39.406: INFO: Pod "downwardapi-volume-3cad3b96-ab0f-4533-89a3-15afe72c7789" satisfied condition "success or failure"
Mar  6 03:44:39.410: INFO: Trying to get logs from node worker02 pod downwardapi-volume-3cad3b96-ab0f-4533-89a3-15afe72c7789 container client-container: 
STEP: delete the pod
Mar  6 03:44:39.425: INFO: Waiting for pod downwardapi-volume-3cad3b96-ab0f-4533-89a3-15afe72c7789 to disappear
Mar  6 03:44:39.427: INFO: Pod downwardapi-volume-3cad3b96-ab0f-4533-89a3-15afe72c7789 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:44:39.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-545" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3524,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:44:39.433: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-8715
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8715.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8715.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8715.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8715.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8715.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8715.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
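The awk pipeline in both probe scripts derives the pod's A record from its IP by replacing dots with dashes. That transformation, extracted into a small helper for reference (the helper name and default domain are assumptions; the record shape matches the `10-244-1-7.<namespace>.pod.cluster.local` form the script builds):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build the pod A record the probe queries: the awk pipeline in the
    log turns an IP like 10.244.1.7 into
    10-244-1-7.<namespace>.pod.<cluster_domain>."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# The namespace in this run was dns-8715.
assert pod_a_record("10.244.1.7", "dns-8715") == \
    "10-244-1-7.dns-8715.pod.cluster.local"
```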

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:44:43.598: INFO: DNS probes using dns-8715/dns-test-05c2922a-0eaa-48c5-9685-5132c7b9c26d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:44:43.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8715" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":194,"skipped":3534,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:44:43.639: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-4655
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:44:47.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4655" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3551,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:44:47.787: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-2757
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:44:48.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar  6 03:44:50.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063088, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063088, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063088, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063088, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:44:53.189: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
Mar  6 03:45:03.210: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:45:13.323: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:45:23.421: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:45:33.523: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:45:43.532: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:45:43.532: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-2757".
STEP: Found 6 events.
Mar  6 03:45:43.535: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-cw6kj: {default-scheduler } Scheduled: Successfully assigned webhook-2757/sample-webhook-deployment-5f65f8c764-cw6kj to worker02
Mar  6 03:45:43.535: INFO: At 2020-03-06 03:44:48 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:45:43.535: INFO: At 2020-03-06 03:44:48 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-cw6kj
Mar  6 03:45:43.535: INFO: At 2020-03-06 03:44:48 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-cw6kj: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:45:43.535: INFO: At 2020-03-06 03:44:48 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-cw6kj: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:45:43.535: INFO: At 2020-03-06 03:44:49 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-cw6kj: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:45:43.537: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:45:43.537: INFO: sample-webhook-deployment-5f65f8c764-cw6kj  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:44:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:44:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:44:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:44:48 +0000 UTC  }]
Mar  6 03:45:43.537: INFO: 
Mar  6 03:45:43.540: INFO: 
Logging node info for node master01
Mar  6 03:45:43.541: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 25757 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:44:04 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:45:43.542: INFO: 
Logging kubelet events for node master01
Mar  6 03:45:43.544: INFO: 
Logging pods the kubelet thinks is on node master01
Mar  6 03:45:43.553: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:45:43.553: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:45:43.553: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:45:43.553: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.553: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:45:43.553: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.553: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:45:43.553: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.553: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:45:43.553: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.553: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:45:43.554: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:45:43.554: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:45:43.554: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:45:43.556572      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:45:43.578: INFO: 
Latency metrics for node master01
Mar  6 03:45:43.578: INFO: 
Logging node info for node master02
Mar  6 03:45:43.580: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 25725 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:44:01 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:45:43.580: INFO: 
Logging kubelet events for node master02
Mar  6 03:45:43.582: INFO: 
Logging pods the kubelet thinks is on node master02
Mar  6 03:45:43.593: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.593: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:45:43.593: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.593: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:45:43.593: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.593: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:45:43.593: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.593: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:45:43.593: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:45:43.593: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:45:43.593: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:45:43.593: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.593: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:45:43.593: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:45:43.593: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:45:43.593: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:45:43.597323      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:45:43.623: INFO: 
Latency metrics for node master02
Mar  6 03:45:43.623: INFO: 
Logging node info for node master03
Mar  6 03:45:43.624: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 25732 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:44:01 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:45:43.625: INFO: 
Logging kubelet events for node master03
Mar  6 03:45:43.626: INFO: 
Logging pods the kubelet thinks is on node master03
Mar  6 03:45:43.637: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:45:43.637: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:45:43.637: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:45:43.637: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:45:43.637: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:45:43.637: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:45:43.637: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:45:43.637: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:45:43.637: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:45:43.637: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:45:43.637: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.637: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:45:43.640091      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:45:43.658: INFO: 
Latency metrics for node master03
Mar  6 03:45:43.658: INFO: 
Logging node info for node worker01
Mar  6 03:45:43.660: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 26350 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:45:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:45:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:45:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:45:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:45:43.661: INFO: 
Logging kubelet events for node worker01
Mar  6 03:45:43.662: INFO: 
Logging pods the kubelet thinks is on node worker01
Mar  6 03:45:43.672: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:45:43.672: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:45:43.672: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:45:43.672: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:45:43.672: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:45:43.672: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:45:43.672: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:45:43.672: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:45:43.672: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:45:43.672: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:45:43.672: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:45:43.672: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:45:43.672: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:45:43.672: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:45:43.672: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:45:43.675412      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:45:43.694: INFO: 
Latency metrics for node worker01
Mar  6 03:45:43.694: INFO: 
Logging node info for node worker02
Mar  6 03:45:43.696: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 26279 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:44:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:44:55 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:45:43.696: INFO: 
Logging kubelet events for node worker02
Mar  6 03:45:43.698: INFO: 
Logging pods the kubelet thinks is on node worker02
Mar  6 03:45:43.702: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:45:43.702: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:45:43.702: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:45:43.702: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:45:43.702: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:45:43.702: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:45:43.702: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:45:43.702: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:45:43.702: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:45:43.702: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:45:43.702: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:45:43.702: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:45:43.702: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.702: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:45:43.702: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.702: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:45:43.702: INFO: sample-webhook-deployment-5f65f8c764-cw6kj started at 2020-03-06 03:44:48 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:45:43.702: INFO: 	Container sample-webhook ready: true, restart count 0
W0306 03:45:43.704698      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:45:43.727: INFO: 
Latency metrics for node worker02
Mar  6 03:45:43.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2757" for this suite.
STEP: Destroying namespace "webhook-2757-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [55.997 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:45:43.532: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:911
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":195,"skipped":3576,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:45:43.785: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1052
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-fba62bd6-9708-45ab-8d21-4383b68afca8
STEP: Creating a pod to test consume configMaps
Mar  6 03:45:44.024: INFO: Waiting up to 5m0s for pod "pod-configmaps-5acc1a41-8259-4197-85cc-5e0011052823" in namespace "configmap-1052" to be "success or failure"
Mar  6 03:45:44.026: INFO: Pod "pod-configmaps-5acc1a41-8259-4197-85cc-5e0011052823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024298ms
Mar  6 03:45:46.029: INFO: Pod "pod-configmaps-5acc1a41-8259-4197-85cc-5e0011052823": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004503335s
STEP: Saw pod success
Mar  6 03:45:46.029: INFO: Pod "pod-configmaps-5acc1a41-8259-4197-85cc-5e0011052823" satisfied condition "success or failure"
Mar  6 03:45:46.031: INFO: Trying to get logs from node worker02 pod pod-configmaps-5acc1a41-8259-4197-85cc-5e0011052823 container configmap-volume-test: 
STEP: delete the pod
Mar  6 03:45:46.061: INFO: Waiting for pod pod-configmaps-5acc1a41-8259-4197-85cc-5e0011052823 to disappear
Mar  6 03:45:46.063: INFO: Pod pod-configmaps-5acc1a41-8259-4197-85cc-5e0011052823 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:45:46.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1052" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3579,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:45:46.070: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-6656
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-gfzf
STEP: Creating a pod to test atomic-volume-subpath
Mar  6 03:45:46.215: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gfzf" in namespace "subpath-6656" to be "success or failure"
Mar  6 03:45:46.219: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.518225ms
Mar  6 03:45:48.221: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 2.006008463s
Mar  6 03:45:50.224: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 4.008726395s
Mar  6 03:45:52.226: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 6.011250002s
Mar  6 03:45:54.229: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 8.013787596s
Mar  6 03:45:56.232: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 10.016428796s
Mar  6 03:45:58.234: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 12.019121407s
Mar  6 03:46:00.237: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 14.021320044s
Mar  6 03:46:02.239: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 16.023815308s
Mar  6 03:46:04.250: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 18.034474923s
Mar  6 03:46:06.252: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Running", Reason="", readiness=true. Elapsed: 20.036817208s
Mar  6 03:46:08.254: INFO: Pod "pod-subpath-test-configmap-gfzf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.039284573s
STEP: Saw pod success
Mar  6 03:46:08.254: INFO: Pod "pod-subpath-test-configmap-gfzf" satisfied condition "success or failure"
Mar  6 03:46:08.257: INFO: Trying to get logs from node worker02 pod pod-subpath-test-configmap-gfzf container test-container-subpath-configmap-gfzf: 
STEP: delete the pod
Mar  6 03:46:08.273: INFO: Waiting for pod pod-subpath-test-configmap-gfzf to disappear
Mar  6 03:46:08.277: INFO: Pod pod-subpath-test-configmap-gfzf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gfzf
Mar  6 03:46:08.277: INFO: Deleting pod "pod-subpath-test-configmap-gfzf" in namespace "subpath-6656"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:46:08.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6656" for this suite.

• [SLOW TEST:22.220 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":197,"skipped":3584,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:46:08.290: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-7485
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
Mar  6 03:46:14.446: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:46:14.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0306 03:46:14.446287      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-7485" for this suite.

• [SLOW TEST:6.165 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":198,"skipped":3609,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:46:14.455: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2750
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar  6 03:46:17.111: INFO: Successfully updated pod "pod-update-activedeadlineseconds-41a401c6-df51-41cf-b9f7-5cf4c133320e"
Mar  6 03:46:17.111: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-41a401c6-df51-41cf-b9f7-5cf4c133320e" in namespace "pods-2750" to be "terminated due to deadline exceeded"
Mar  6 03:46:17.116: INFO: Pod "pod-update-activedeadlineseconds-41a401c6-df51-41cf-b9f7-5cf4c133320e": Phase="Running", Reason="", readiness=true. Elapsed: 4.950917ms
Mar  6 03:46:19.118: INFO: Pod "pod-update-activedeadlineseconds-41a401c6-df51-41cf-b9f7-5cf4c133320e": Phase="Running", Reason="", readiness=true. Elapsed: 2.007178904s
Mar  6 03:46:21.120: INFO: Pod "pod-update-activedeadlineseconds-41a401c6-df51-41cf-b9f7-5cf4c133320e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.009844592s
Mar  6 03:46:21.120: INFO: Pod "pod-update-activedeadlineseconds-41a401c6-df51-41cf-b9f7-5cf4c133320e" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:46:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2750" for this suite.

• [SLOW TEST:6.672 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3613,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:46:21.127: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename dns
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in dns-3605
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3605.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3605.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3605.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3605.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3605.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar  6 03:46:25.285: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:25.292: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:25.294: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:25.310: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:25.314: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:25.319: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:25.323: INFO: Lookups using dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3605.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3605.svc.cluster.local]

Mar  6 03:46:30.326: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:30.329: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:30.339: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:30.342: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:30.350: INFO: Lookups using dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local]

Mar  6 03:46:35.327: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:35.329: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:35.339: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:35.341: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:35.349: INFO: Lookups using dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local]

Mar  6 03:46:40.326: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:40.329: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:40.339: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:40.341: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:40.350: INFO: Lookups using dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local]

Mar  6 03:46:45.326: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:45.329: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:45.342: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:45.344: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:45.352: INFO: Lookups using dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local]

Mar  6 03:46:50.326: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:50.328: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:50.339: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:50.341: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local from pod dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878: the server could not find the requested resource (get pods dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878)
Mar  6 03:46:50.351: INFO: Lookups using dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3605.svc.cluster.local]

Mar  6 03:46:55.349: INFO: DNS probes using dns-3605/dns-test-0b8bfb3d-7165-4e80-a73f-b71f45387878 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:46:55.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3605" for this suite.

• [SLOW TEST:34.271 seconds]
[sig-network] DNS
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":200,"skipped":3629,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:46:55.398: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename init-container
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in init-container-4416
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Mar  6 03:46:55.535: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:46:58.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4416" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":201,"skipped":3634,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:46:58.643: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-2351
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:46:58.787: INFO: Create a RollingUpdate DaemonSet
Mar  6 03:46:58.790: INFO: Check that daemon pods launch on every node of the cluster
Mar  6 03:46:58.793: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:46:58.793: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:46:58.793: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:46:58.795: INFO: Number of nodes with available pods: 0
Mar  6 03:46:58.795: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:46:59.798: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:46:59.798: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:46:59.798: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:46:59.803: INFO: Number of nodes with available pods: 0
Mar  6 03:46:59.803: INFO: Node worker01 is running more than one daemon pod
Mar  6 03:47:00.798: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:00.798: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:00.798: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:00.800: INFO: Number of nodes with available pods: 2
Mar  6 03:47:00.800: INFO: Number of running nodes: 2, number of available pods: 2
Mar  6 03:47:00.800: INFO: Update the DaemonSet to trigger a rollout
Mar  6 03:47:00.808: INFO: Updating DaemonSet daemon-set
Mar  6 03:47:15.821: INFO: Roll back the DaemonSet before rollout is complete
Mar  6 03:47:15.826: INFO: Updating DaemonSet daemon-set
Mar  6 03:47:15.826: INFO: Make sure DaemonSet rollback is complete
Mar  6 03:47:15.830: INFO: Wrong image for pod: daemon-set-n6t8w. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar  6 03:47:15.830: INFO: Pod daemon-set-n6t8w is not available
Mar  6 03:47:15.833: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:15.833: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:15.833: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:16.835: INFO: Wrong image for pod: daemon-set-n6t8w. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar  6 03:47:16.835: INFO: Pod daemon-set-n6t8w is not available
Mar  6 03:47:16.838: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:16.838: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:16.838: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:17.841: INFO: Wrong image for pod: daemon-set-n6t8w. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar  6 03:47:17.841: INFO: Pod daemon-set-n6t8w is not available
Mar  6 03:47:17.846: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:17.846: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:17.846: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:18.835: INFO: Pod daemon-set-zw8rs is not available
Mar  6 03:47:18.840: INFO: DaemonSet pods can't tolerate node master01 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:18.840: INFO: DaemonSet pods can't tolerate node master02 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar  6 03:47:18.840: INFO: DaemonSet pods can't tolerate node master03 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2351, will wait for the garbage collector to delete the pods
Mar  6 03:47:18.904: INFO: Deleting DaemonSet.extensions daemon-set took: 6.082529ms
Mar  6 03:47:19.404: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.11736ms
Mar  6 03:47:25.406: INFO: Number of nodes with available pods: 0
Mar  6 03:47:25.406: INFO: Number of running nodes: 0, number of available pods: 0
Mar  6 03:47:25.408: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2351/daemonsets","resourceVersion":"27275"},"items":null}

Mar  6 03:47:25.411: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2351/pods","resourceVersion":"27275"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:47:25.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2351" for this suite.

• [SLOW TEST:26.787 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":202,"skipped":3653,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:47:25.431: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-3266
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Mar  6 03:47:25.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 api-versions'
Mar  6 03:47:25.626: INFO: stderr: ""
Mar  6 03:47:25.626: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbatch/v2alpha1\ncertificates.k8s.io/v1beta1\ncontour.heptio.com/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nmetrics.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nprojectcontour.io/v1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:47:25.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3266" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":203,"skipped":3653,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:47:25.633: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-362
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Mar  6 03:47:25.762: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: mark a version not serverd
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:47:53.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-362" for this suite.

• [SLOW TEST:28.260 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":204,"skipped":3659,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:47:53.893: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-probe
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-probe-2700
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-d81b4ae1-4d49-414b-a784-ee5529f3d2c5 in namespace container-probe-2700
Mar  6 03:47:56.037: INFO: Started pod busybox-d81b4ae1-4d49-414b-a784-ee5529f3d2c5 in namespace container-probe-2700
STEP: checking the pod's current state and verifying that restartCount is present
Mar  6 03:47:56.039: INFO: Initial restart count of pod busybox-d81b4ae1-4d49-414b-a784-ee5529f3d2c5 is 0
Mar  6 03:48:48.106: INFO: Restart count of pod container-probe-2700/busybox-d81b4ae1-4d49-414b-a784-ee5529f3d2c5 is now 1 (52.066596402s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:48:48.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2700" for this suite.

• [SLOW TEST:54.237 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3672,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:48:48.130: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-6342
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-bfb1212e-92c4-40e5-b652-7d18ef96f134
STEP: Creating a pod to test consume configMaps
Mar  6 03:48:48.275: INFO: Waiting up to 5m0s for pod "pod-configmaps-50191f07-8765-43d2-ae25-67e7ccc1b8a9" in namespace "configmap-6342" to be "success or failure"
Mar  6 03:48:48.277: INFO: Pod "pod-configmaps-50191f07-8765-43d2-ae25-67e7ccc1b8a9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.828477ms
Mar  6 03:48:50.279: INFO: Pod "pod-configmaps-50191f07-8765-43d2-ae25-67e7ccc1b8a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004107957s
STEP: Saw pod success
Mar  6 03:48:50.279: INFO: Pod "pod-configmaps-50191f07-8765-43d2-ae25-67e7ccc1b8a9" satisfied condition "success or failure"
Mar  6 03:48:50.282: INFO: Trying to get logs from node worker02 pod pod-configmaps-50191f07-8765-43d2-ae25-67e7ccc1b8a9 container configmap-volume-test: 
STEP: delete the pod
Mar  6 03:48:50.308: INFO: Waiting for pod pod-configmaps-50191f07-8765-43d2-ae25-67e7ccc1b8a9 to disappear
Mar  6 03:48:50.311: INFO: Pod pod-configmaps-50191f07-8765-43d2-ae25-67e7ccc1b8a9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:48:50.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6342" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3672,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:48:50.328: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8573
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar  6 03:48:52.980: INFO: Successfully updated pod "pod-update-88f41260-b210-4b5a-a1ea-4b7aa99d3c00"
STEP: verifying the updated pod is in kubernetes
Mar  6 03:48:52.983: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:48:52.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8573" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3674,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:48:52.990: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-7161
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7161
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7161
STEP: creating replication controller externalsvc in namespace services-7161
I0306 03:48:53.175156      19 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7161, replica count: 2
I0306 03:48:56.225393      19 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Mar  6 03:48:56.249: INFO: Creating new exec pod
Mar  6 03:48:58.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-7161 execpodmv6c5 -- /bin/sh -x -c nslookup clusterip-service'
Mar  6 03:48:58.491: INFO: stderr: "+ nslookup clusterip-service\n"
Mar  6 03:48:58.491: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7161.svc.cluster.local\tcanonical name = externalsvc.services-7161.svc.cluster.local.\nName:\texternalsvc.services-7161.svc.cluster.local\nAddress: 10.102.6.146\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7161, will wait for the garbage collector to delete the pods
Mar  6 03:48:58.549: INFO: Deleting ReplicationController externalsvc took: 5.640948ms
Mar  6 03:48:59.050: INFO: Terminating ReplicationController externalsvc pods took: 500.12593ms
Mar  6 03:49:05.275: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:49:05.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7161" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.306 seconds]
[sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":208,"skipped":3720,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:49:05.296: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3249
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-2da1aaa9-8e33-4004-9f07-202abc89a6b9
STEP: Creating a pod to test consume secrets
Mar  6 03:49:05.498: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8aebf99-ec17-4c39-add1-8a3905d16fcf" in namespace "projected-3249" to be "success or failure"
Mar  6 03:49:05.500: INFO: Pod "pod-projected-secrets-d8aebf99-ec17-4c39-add1-8a3905d16fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241836ms
Mar  6 03:49:07.504: INFO: Pod "pod-projected-secrets-d8aebf99-ec17-4c39-add1-8a3905d16fcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006247617s
STEP: Saw pod success
Mar  6 03:49:07.504: INFO: Pod "pod-projected-secrets-d8aebf99-ec17-4c39-add1-8a3905d16fcf" satisfied condition "success or failure"
Mar  6 03:49:07.507: INFO: Trying to get logs from node worker02 pod pod-projected-secrets-d8aebf99-ec17-4c39-add1-8a3905d16fcf container projected-secret-volume-test: 
STEP: delete the pod
Mar  6 03:49:07.527: INFO: Waiting for pod pod-projected-secrets-d8aebf99-ec17-4c39-add1-8a3905d16fcf to disappear
Mar  6 03:49:07.529: INFO: Pod pod-projected-secrets-d8aebf99-ec17-4c39-add1-8a3905d16fcf no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:49:07.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3249" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3747,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:49:07.536: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-2686
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:49:09.695: INFO: Waiting up to 5m0s for pod "client-envvars-ce96f91a-dd94-42df-8f5b-1435f9f468de" in namespace "pods-2686" to be "success or failure"
Mar  6 03:49:09.699: INFO: Pod "client-envvars-ce96f91a-dd94-42df-8f5b-1435f9f468de": Phase="Pending", Reason="", readiness=false. Elapsed: 3.912568ms
Mar  6 03:49:11.707: INFO: Pod "client-envvars-ce96f91a-dd94-42df-8f5b-1435f9f468de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011705245s
STEP: Saw pod success
Mar  6 03:49:11.707: INFO: Pod "client-envvars-ce96f91a-dd94-42df-8f5b-1435f9f468de" satisfied condition "success or failure"
Mar  6 03:49:11.711: INFO: Trying to get logs from node worker02 pod client-envvars-ce96f91a-dd94-42df-8f5b-1435f9f468de container env3cont: 
STEP: delete the pod
Mar  6 03:49:11.725: INFO: Waiting for pod client-envvars-ce96f91a-dd94-42df-8f5b-1435f9f468de to disappear
Mar  6 03:49:11.727: INFO: Pod client-envvars-ce96f91a-dd94-42df-8f5b-1435f9f468de no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:49:11.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2686" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3763,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:49:11.734: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-9598
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:49:12.297: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar  6 03:49:14.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063352, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063352, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063352, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063352, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:49:17.322: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:49:17.325: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3441-crds.webhook.example.com via the AdmissionRegistration API
Mar  6 03:49:22.959: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:49:33.069: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:49:43.168: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:49:53.270: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:50:03.279: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:50:03.279: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-9598".
STEP: Found 6 events.
Mar  6 03:50:03.790: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-xw5cl: {default-scheduler } Scheduled: Successfully assigned webhook-9598/sample-webhook-deployment-5f65f8c764-xw5cl to worker02
Mar  6 03:50:03.790: INFO: At 2020-03-06 03:49:12 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:50:03.790: INFO: At 2020-03-06 03:49:12 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-xw5cl
Mar  6 03:50:03.790: INFO: At 2020-03-06 03:49:12 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-xw5cl: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:50:03.790: INFO: At 2020-03-06 03:49:12 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-xw5cl: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:50:03.790: INFO: At 2020-03-06 03:49:13 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-xw5cl: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:50:03.792: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:50:03.792: INFO: sample-webhook-deployment-5f65f8c764-xw5cl  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:49:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:49:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:49:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:49:12 +0000 UTC  }]
Mar  6 03:50:03.792: INFO: 
Mar  6 03:50:03.795: INFO: 
Logging node info for node master01
Mar  6 03:50:03.797: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 27827 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:05 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:05 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:05 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:49:05 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:50:03.797: INFO: 
Logging kubelet events for node master01
Mar  6 03:50:03.799: INFO: 
Logging pods the kubelet thinks is on node master01
Mar  6 03:50:03.813: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:50:03.814: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:50:03.814: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:50:03.814: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.814: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:50:03.814: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.814: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:50:03.814: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.814: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:50:03.814: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.814: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:50:03.814: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:50:03.814: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:50:03.814: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:50:03.818000      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:50:03.836: INFO: 
Latency metrics for node master01
Mar  6 03:50:03.836: INFO: 
Logging node info for node master02
Mar  6 03:50:03.838: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 27789 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:49:01 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:50:03.838: INFO: 
Logging kubelet events for node master02
Mar  6 03:50:03.840: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:50:03.849: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.849: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:50:03.849: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:50:03.849: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:50:03.849: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:50:03.849: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.849: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:50:03.849: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.849: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:50:03.849: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.849: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:50:03.849: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.849: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:50:03.849: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:50:03.850: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:50:03.850: INFO: 	Container kube-flannel ready: true, restart count 0
W0306 03:50:03.852527      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:50:03.868: INFO: 
Latency metrics for node master02
Mar  6 03:50:03.868: INFO: 
Logging node info for node master03
Mar  6 03:50:03.870: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 27795 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:49:02 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:50:03.870: INFO: 
Logging kubelet events for node master03
Mar  6 03:50:03.871: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:50:03.881: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:50:03.881: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:50:03.881: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:50:03.881: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
Mar  6 03:50:03.881: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:50:03.881: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:50:03.881: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:50:03.881: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:50:03.881: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:50:03.881: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:50:03.881: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.881: INFO: 	Container kube-proxy ready: true, restart count 0
W0306 03:50:03.884108      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:50:03.900: INFO: 
Latency metrics for node master03
Mar  6 03:50:03.900: INFO: 
Logging node info for node worker01
Mar  6 03:50:03.902: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 26350 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:45:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:45:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:45:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:45:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:50:03.902: INFO: 
Logging kubelet events for node worker01
Mar  6 03:50:03.907: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:50:03.919: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:50:03.919: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:50:03.919: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:50:03.919: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:50:03.919: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:50:03.919: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:50:03.919: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:50:03.919: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:50:03.919: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:50:03.919: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:50:03.919: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:50:03.919: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:50:03.919: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:50:03.919: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.919: INFO: 	Container kuard ready: true, restart count 0
W0306 03:50:03.922365      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:50:03.938: INFO: 
Latency metrics for node worker01
Mar  6 03:50:03.938: INFO: 
Logging node info for node worker02
Mar  6 03:50:03.940: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 28128 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:49:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:50:03.940: INFO: 
Logging kubelet events for node worker02
Mar  6 03:50:03.942: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:50:03.946: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:50:03.946: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:50:03.946: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:50:03.946: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:50:03.946: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:50:03.946: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:50:03.946: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:50:03.946: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:50:03.946: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:50:03.946: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:50:03.946: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:50:03.946: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:50:03.946: INFO: sample-webhook-deployment-5f65f8c764-xw5cl started at 2020-03-06 03:49:12 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.946: INFO: 	Container sample-webhook ready: true, restart count 0
Mar  6 03:50:03.946: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.946: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:50:03.946: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:50:03.946: INFO: 	Container kube-sonobuoy ready: true, restart count 0
W0306 03:50:03.949287      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:50:03.976: INFO: 
Latency metrics for node worker02
Mar  6 03:50:03.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9598" for this suite.
STEP: Destroying namespace "webhook-9598-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [52.308 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:50:03.279: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1865
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":210,"skipped":3793,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]"]}
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:50:04.041: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-8097
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:50:04.184: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5efb395-ea6f-4aed-a3cf-3471acac4001" in namespace "downward-api-8097" to be "success or failure"
Mar  6 03:50:04.186: INFO: Pod "downwardapi-volume-d5efb395-ea6f-4aed-a3cf-3471acac4001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335984ms
Mar  6 03:50:06.189: INFO: Pod "downwardapi-volume-d5efb395-ea6f-4aed-a3cf-3471acac4001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004885791s
STEP: Saw pod success
Mar  6 03:50:06.189: INFO: Pod "downwardapi-volume-d5efb395-ea6f-4aed-a3cf-3471acac4001" satisfied condition "success or failure"
Mar  6 03:50:06.192: INFO: Trying to get logs from node worker02 pod downwardapi-volume-d5efb395-ea6f-4aed-a3cf-3471acac4001 container client-container: 
STEP: delete the pod
Mar  6 03:50:06.211: INFO: Waiting for pod downwardapi-volume-d5efb395-ea6f-4aed-a3cf-3471acac4001 to disappear
Mar  6 03:50:06.232: INFO: Pod downwardapi-volume-d5efb395-ea6f-4aed-a3cf-3471acac4001 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:50:06.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8097" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3793,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:50:06.246: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-8780
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Mar  6 03:50:06.607: INFO: Pod name wrapped-volume-race-6e6c41fe-d8a7-4e0b-9261-8ca7522dea0c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6e6c41fe-d8a7-4e0b-9261-8ca7522dea0c in namespace emptydir-wrapper-8780, will wait for the garbage collector to delete the pods
Mar  6 03:50:22.721: INFO: Deleting ReplicationController wrapped-volume-race-6e6c41fe-d8a7-4e0b-9261-8ca7522dea0c took: 7.445533ms
Mar  6 03:50:23.221: INFO: Terminating ReplicationController wrapped-volume-race-6e6c41fe-d8a7-4e0b-9261-8ca7522dea0c pods took: 500.129831ms
STEP: Creating RC which spawns configmap-volume pods
Mar  6 03:50:35.231: INFO: Pod name wrapped-volume-race-b38baa36-be59-4c87-8ac4-fba292331a35: Found 0 pods out of 5
Mar  6 03:50:40.235: INFO: Pod name wrapped-volume-race-b38baa36-be59-4c87-8ac4-fba292331a35: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b38baa36-be59-4c87-8ac4-fba292331a35 in namespace emptydir-wrapper-8780, will wait for the garbage collector to delete the pods
Mar  6 03:50:50.309: INFO: Deleting ReplicationController wrapped-volume-race-b38baa36-be59-4c87-8ac4-fba292331a35 took: 6.750366ms
Mar  6 03:50:50.809: INFO: Terminating ReplicationController wrapped-volume-race-b38baa36-be59-4c87-8ac4-fba292331a35 pods took: 500.141654ms
STEP: Creating RC which spawns configmap-volume pods
Mar  6 03:50:57.525: INFO: Pod name wrapped-volume-race-763505b1-4738-4b57-beeb-b28a2319ec16: Found 0 pods out of 5
Mar  6 03:51:02.529: INFO: Pod name wrapped-volume-race-763505b1-4738-4b57-beeb-b28a2319ec16: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-763505b1-4738-4b57-beeb-b28a2319ec16 in namespace emptydir-wrapper-8780, will wait for the garbage collector to delete the pods
Mar  6 03:51:14.608: INFO: Deleting ReplicationController wrapped-volume-race-763505b1-4738-4b57-beeb-b28a2319ec16 took: 10.045906ms
Mar  6 03:51:15.108: INFO: Terminating ReplicationController wrapped-volume-race-763505b1-4738-4b57-beeb-b28a2319ec16 pods took: 500.234427ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:51:25.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8780" for this suite.

• [SLOW TEST:79.235 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":212,"skipped":3835,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom 
resource [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:51:25.481: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename tables
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in tables-8525
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:51:25.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8525" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":213,"skipped":3877,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:51:25.624: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename subpath
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in subpath-5400
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-rmmd
STEP: Creating a pod to test atomic-volume-subpath
Mar  6 03:51:25.762: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rmmd" in namespace "subpath-5400" to be "success or failure"
Mar  6 03:51:25.764: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.507877ms
Mar  6 03:51:27.766: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 2.004825104s
Mar  6 03:51:29.769: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 4.007449868s
Mar  6 03:51:31.771: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 6.009860955s
Mar  6 03:51:33.774: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 8.012056193s
Mar  6 03:51:35.777: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 10.014925862s
Mar  6 03:51:37.779: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 12.017446681s
Mar  6 03:51:39.781: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 14.019614179s
Mar  6 03:51:41.784: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 16.02204352s
Mar  6 03:51:43.786: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 18.024527624s
Mar  6 03:51:45.789: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Running", Reason="", readiness=true. Elapsed: 20.0271292s
Mar  6 03:51:47.791: INFO: Pod "pod-subpath-test-projected-rmmd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.029576162s
STEP: Saw pod success
Mar  6 03:51:47.791: INFO: Pod "pod-subpath-test-projected-rmmd" satisfied condition "success or failure"
Mar  6 03:51:47.793: INFO: Trying to get logs from node worker02 pod pod-subpath-test-projected-rmmd container test-container-subpath-projected-rmmd: 
STEP: delete the pod
Mar  6 03:51:47.821: INFO: Waiting for pod pod-subpath-test-projected-rmmd to disappear
Mar  6 03:51:47.823: INFO: Pod pod-subpath-test-projected-rmmd no longer exists
STEP: Deleting pod pod-subpath-test-projected-rmmd
Mar  6 03:51:47.823: INFO: Deleting pod "pod-subpath-test-projected-rmmd" in namespace "subpath-5400"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:51:47.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5400" for this suite.

• [SLOW TEST:22.206 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":214,"skipped":3882,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom 
resource [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:51:47.830: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename var-expansion
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in var-expansion-4362
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Mar  6 03:51:47.964: INFO: Waiting up to 5m0s for pod "var-expansion-4a22317d-547a-4899-a7c5-b5711ec42a34" in namespace "var-expansion-4362" to be "success or failure"
Mar  6 03:51:47.965: INFO: Pod "var-expansion-4a22317d-547a-4899-a7c5-b5711ec42a34": Phase="Pending", Reason="", readiness=false. Elapsed: 1.913346ms
Mar  6 03:51:49.968: INFO: Pod "var-expansion-4a22317d-547a-4899-a7c5-b5711ec42a34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004446871s
STEP: Saw pod success
Mar  6 03:51:49.968: INFO: Pod "var-expansion-4a22317d-547a-4899-a7c5-b5711ec42a34" satisfied condition "success or failure"
Mar  6 03:51:49.970: INFO: Trying to get logs from node worker02 pod var-expansion-4a22317d-547a-4899-a7c5-b5711ec42a34 container dapi-container: 
STEP: delete the pod
Mar  6 03:51:49.984: INFO: Waiting for pod var-expansion-4a22317d-547a-4899-a7c5-b5711ec42a34 to disappear
Mar  6 03:51:49.985: INFO: Pod var-expansion-4a22317d-547a-4899-a7c5-b5711ec42a34 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:51:49.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4362" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3890,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom 
resource [Conformance]"]}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:51:49.992: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename svcaccounts
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in svcaccounts-6347
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar  6 03:51:52.656: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6347 pod-service-account-b6aa5768-8417-4e9b-b586-6ccdad7ecc7d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar  6 03:51:52.989: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6347 pod-service-account-b6aa5768-8417-4e9b-b586-6ccdad7ecc7d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar  6 03:51:53.192: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6347 pod-service-account-b6aa5768-8417-4e9b-b586-6ccdad7ecc7d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:51:53.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6347" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":216,"skipped":3893,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:51:53.422: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5823
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:51:54.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar  6 03:51:56.135: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063514, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063514, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063514, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063514, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:51:59.149: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
Mar  6 03:52:33.196: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.StatusError | 0xc0001823c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Timeout: request did not complete within requested timeout",
            Reason: "Timeout",
            Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 504,
        },
    }
    Timeout: request did not complete within requested timeout
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-5823".
STEP: Found 6 events.
Mar  6 03:52:33.200: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-g7hc7: {default-scheduler } Scheduled: Successfully assigned webhook-5823/sample-webhook-deployment-5f65f8c764-g7hc7 to worker02
Mar  6 03:52:33.200: INFO: At 2020-03-06 03:51:54 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:52:33.200: INFO: At 2020-03-06 03:51:54 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-g7hc7
Mar  6 03:52:33.200: INFO: At 2020-03-06 03:51:54 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-g7hc7: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:52:33.200: INFO: At 2020-03-06 03:51:54 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-g7hc7: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:52:33.200: INFO: At 2020-03-06 03:51:54 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-g7hc7: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:52:33.203: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:52:33.203: INFO: sample-webhook-deployment-5f65f8c764-g7hc7  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:51:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:51:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:51:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:51:54 +0000 UTC  }]
Mar  6 03:52:33.203: INFO: 
Mar  6 03:52:33.206: INFO: 
Logging node info for node master01
Mar  6 03:52:33.208: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 27827 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:05 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:05 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:05 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:49:05 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:52:33.209: INFO: 
Logging kubelet events for node master01
Mar  6 03:52:33.215: INFO: 
Logging pods the kubelet thinks is on node master01
Mar  6 03:52:33.226: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:52:33.226: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:52:33.226: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:52:33.226: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.226: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:52:33.226: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.226: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:52:33.226: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.226: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:52:33.226: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.226: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:52:33.226: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:52:33.226: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:52:33.226: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:52:33.228920      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:52:33.243: INFO: 
Latency metrics for node master01
Mar  6 03:52:33.243: INFO: 
Logging node info for node master02
Mar  6 03:52:33.245: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 27789 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:01 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:49:01 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:52:33.245: INFO: 
Logging kubelet events for node master02
Mar  6 03:52:33.247: INFO: 
Logging pods the kubelet thinks is on node master02
Mar  6 03:52:33.258: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.258: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:52:33.258: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.258: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:52:33.258: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.258: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:52:33.258: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.258: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:52:33.258: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:52:33.258: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:52:33.258: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:52:33.258: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.258: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:52:33.258: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:52:33.258: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:52:33.258: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:52:33.260732      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:52:33.277: INFO: 
Latency metrics for node master02
Mar  6 03:52:33.277: INFO: 
Logging node info for node master03
Mar  6 03:52:33.279: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 27795 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:49:02 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:52:33.279: INFO: 
Logging kubelet events for node master03
Mar  6 03:52:33.281: INFO: 
Logging pods the kubelet thinks is on node master03
Mar  6 03:52:33.292: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:52:33.292: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:52:33.292: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:52:33.292: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:52:33.292: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:52:33.292: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:52:33.292: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:52:33.292: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:52:33.292: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:52:33.292: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:52:33.292: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.292: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:52:33.294890      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:52:33.312: INFO: 
Latency metrics for node master03
Mar  6 03:52:33.312: INFO: 
Logging node info for node worker01
Mar  6 03:52:33.318: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 28397 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:50:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:50:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:50:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:50:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:52:33.318: INFO: 
Logging kubelet events for node worker01
Mar  6 03:52:33.321: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:52:33.331: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:52:33.331: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:52:33.331: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:52:33.331: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:52:33.331: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:52:33.331: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:52:33.331: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:52:33.331: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:52:33.331: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:52:33.331: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:52:33.331: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:52:33.331: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:52:33.331: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:52:33.331: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.331: INFO: 	Container metrics-server ready: true, restart count 0
W0306 03:52:33.333828      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:52:33.353: INFO: 
Latency metrics for node worker01
Mar  6 03:52:33.353: INFO: 
Logging node info for node worker02
Mar  6 03:52:33.355: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 28128 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:49:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:49:56 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:52:33.355: INFO: 
Logging kubelet events for node worker02
Mar  6 03:52:33.357: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:52:33.361: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:52:33.361: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:52:33.361: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:52:33.361: INFO: sample-webhook-deployment-5f65f8c764-g7hc7 started at 2020-03-06 03:51:54 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.361: INFO: 	Container sample-webhook ready: true, restart count 0
Mar  6 03:52:33.361: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:52:33.361: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:52:33.361: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:52:33.361: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:52:33.361: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:52:33.361: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:52:33.361: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:52:33.361: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:52:33.361: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:52:33.361: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.361: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:52:33.361: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:52:33.361: INFO: 	Container kube-sonobuoy ready: true, restart count 0
W0306 03:52:33.364283      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:52:33.385: INFO: 
Latency metrics for node worker02
Mar  6 03:52:33.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5823" for this suite.
STEP: Destroying namespace "webhook-5823-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [40.040 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:52:33.196: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.StatusError | 0xc0001823c0>: {
          ErrStatus: {
              TypeMeta: {Kind: "", APIVersion: ""},
              ListMeta: {
                  SelfLink: "",
                  ResourceVersion: "",
                  Continue: "",
                  RemainingItemCount: nil,
              },
              Status: "Failure",
              Message: "Timeout: request did not complete within requested timeout",
              Reason: "Timeout",
              Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
              Code: 504,
          },
      }
      Timeout: request did not complete within requested timeout
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:682
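The 504 above was returned by the API server while the test waited for its webhook registration to become effective. For context, a mutating webhook is registered with an object like the sketch below; the webhook name, backing service, and path are placeholders, not the test's actual values — only the namespace comes from this log.

```yaml
# Hedged sketch of a MutatingWebhookConfiguration like the one the e2e
# test registers. Service name and path are hypothetical placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook
webhooks:
  - name: example.webhook.example.com
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    failurePolicy: Fail        # reject matching requests if the webhook is unreachable
    timeoutSeconds: 10         # API server abandons the webhook call after 10s
    clientConfig:
      service:
        name: e2e-test-webhook   # placeholder for the service backing the webhook
        namespace: webhook-5823
        path: /mutating-pods
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```

With `failurePolicy: Fail`, an unreachable or slow webhook backend surfaces as request timeouts like the `Code: 504` seen here, which is consistent with the cluster's envoy/contour pods being reported not-ready in the node dumps above.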
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":216,"skipped":3936,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom 
resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:52:33.462: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename statefulset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in statefulset-3429
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3429
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-3429
Mar  6 03:52:33.610: INFO: Found 0 stateful pods, waiting for 1
Mar  6 03:52:43.613: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Mar  6 03:52:43.627: INFO: Deleting all statefulset in ns statefulset-3429
Mar  6 03:52:43.630: INFO: Scaling statefulset ss to 0
Mar  6 03:53:03.649: INFO: Waiting for statefulset status.replicas updated to 0
Mar  6 03:53:03.651: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:53:03.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3429" for this suite.

• [SLOW TEST:30.205 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":217,"skipped":3940,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:53:03.667: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-1611
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Mar  6 03:53:33.830: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:53:33.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0306 03:53:33.830365      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-1611" for this suite.

• [SLOW TEST:30.172 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
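The test above deletes the Deployment with `deleteOptions.PropagationPolicy: Orphan`, so the garbage collector strips the owner reference from the ReplicaSet instead of deleting it. The delete request carries a body along these lines (serialized form is an assumption based on the `meta/v1` DeleteOptions type; with a recent kubectl the equivalent is `kubectl delete deployment <name> --cascade=orphan`):

```yaml
# DeleteOptions body sent with DELETE .../deployments/<name>.
# propagationPolicy: Orphan tells the GC to orphan dependents
# (the ReplicaSet survives and keeps its pods).
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

The 30-second wait in the log is the test confirming the GC did not mistakenly cascade-delete the orphaned ReplicaSet.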
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":218,"skipped":3958,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:53:33.839: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-8456
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:53:40.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8456" for this suite.

• [SLOW TEST:7.149 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
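The "Ensuring resource quota status is calculated" step above waits for the quota controller to populate `status.hard` and `status.used` after creation. A minimal quota of the kind the test creates looks like this (the name and limits are illustrative, not the test's actual values; the namespace is from the log):

```yaml
# Minimal ResourceQuota; shortly after creation the controller fills
# in status.used, which is what the test polls for.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota        # illustrative name
  namespace: resourcequota-8456
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 1Gi
```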
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":219,"skipped":3987,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate 
custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:53:40.988: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename security-context-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in security-context-test-673
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:53:41.132: INFO: Waiting up to 5m0s for pod "busybox-user-65534-2a3e090f-9b63-46a0-98f5-082e2d76862d" in namespace "security-context-test-673" to be "success or failure"
Mar  6 03:53:41.135: INFO: Pod "busybox-user-65534-2a3e090f-9b63-46a0-98f5-082e2d76862d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.59294ms
Mar  6 03:53:43.137: INFO: Pod "busybox-user-65534-2a3e090f-9b63-46a0-98f5-082e2d76862d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004930166s
Mar  6 03:53:43.137: INFO: Pod "busybox-user-65534-2a3e090f-9b63-46a0-98f5-082e2d76862d" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:53:43.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-673" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3987,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:53:43.144: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-8096
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Mar  6 03:53:53.296: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:53:53.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0306 03:53:53.296329      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-8096" for this suite.

• [SLOW TEST:10.160 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":221,"skipped":4041,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:53:53.305: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-9073
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:53:53.444: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Mar  6 03:53:53.451: INFO: Pod name sample-pod: Found 0 pods out of 1
Mar  6 03:53:58.454: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar  6 03:53:58.454: INFO: Creating deployment "test-rolling-update-deployment"
Mar  6 03:53:58.458: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Mar  6 03:53:58.463: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Mar  6 03:54:00.471: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Mar  6 03:54:00.474: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar  6 03:54:00.488: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-9073 /apis/apps/v1/namespaces/deployment-9073/deployments/test-rolling-update-deployment c79b960c-1d96-4569-ac98-14285327a134 30245 1 2020-03-06 03:53:58 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045f4c18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-06 03:53:58 +0000 UTC,LastTransitionTime:2020-03-06 03:53:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-06 03:54:00 +0000 UTC,LastTransitionTime:2020-03-06 03:53:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Mar  6 03:54:00.491: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-9073 /apis/apps/v1/namespaces/deployment-9073/replicasets/test-rolling-update-deployment-67cf4f6444 402c4084-3d47-4953-9091-b10bee8baedd 30233 1 2020-03-06 03:53:58 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c79b960c-1d96-4569-ac98-14285327a134 0xc00464cc37 0xc00464cc38}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00464cca8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:54:00.491: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Mar  6 03:54:00.491: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-9073 /apis/apps/v1/namespaces/deployment-9073/replicasets/test-rolling-update-controller c998c964-8c83-4e12-ab1d-6cc30c5b50ce 30244 2 2020-03-06 03:53:53 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c79b960c-1d96-4569-ac98-14285327a134 0xc00464cb57 0xc00464cb58}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00464cbc8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:54:00.494: INFO: Pod "test-rolling-update-deployment-67cf4f6444-4nsb2" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-4nsb2 test-rolling-update-deployment-67cf4f6444- deployment-9073 /api/v1/namespaces/deployment-9073/pods/test-rolling-update-deployment-67cf4f6444-4nsb2 007bb16a-bc5d-494e-833a-daa3c090843d 30232 0 2020-03-06 03:53:58 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 402c4084-3d47-4953-9091-b10bee8baedd 0xc0045f5007 0xc0045f5008}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ttgmv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ttgmv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ttgmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:53:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:54:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:54:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:53:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.38,StartTime:2020-03-06 03:53:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:53:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://92db0e132271795417b4e4d504a998021d81128d4ed81c903576221a9a5a83e3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:54:00.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9073" for this suite.

• [SLOW TEST:7.197 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":222,"skipped":4084,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:54:00.502: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-5889
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:54:00.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5889" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":4101,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:54:00.661: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-615
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Update Demo
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Mar  6 03:54:00.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 create -f - --namespace=kubectl-615'
Mar  6 03:54:00.977: INFO: stderr: ""
Mar  6 03:54:00.977: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar  6 03:54:00.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-615'
Mar  6 03:54:01.047: INFO: stderr: ""
Mar  6 03:54:01.047: INFO: stdout: "update-demo-nautilus-szpkt update-demo-nautilus-vqz2v "
Mar  6 03:54:01.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-szpkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-615'
Mar  6 03:54:01.112: INFO: stderr: ""
Mar  6 03:54:01.112: INFO: stdout: ""
Mar  6 03:54:01.112: INFO: update-demo-nautilus-szpkt is created but not running
Mar  6 03:54:06.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-615'
Mar  6 03:54:06.176: INFO: stderr: ""
Mar  6 03:54:06.176: INFO: stdout: "update-demo-nautilus-szpkt update-demo-nautilus-vqz2v "
Mar  6 03:54:06.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-szpkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-615'
Mar  6 03:54:06.242: INFO: stderr: ""
Mar  6 03:54:06.242: INFO: stdout: "true"
Mar  6 03:54:06.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-szpkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-615'
Mar  6 03:54:06.307: INFO: stderr: ""
Mar  6 03:54:06.307: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:54:06.307: INFO: validating pod update-demo-nautilus-szpkt
Mar  6 03:54:06.312: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:54:06.312: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:54:06.312: INFO: update-demo-nautilus-szpkt is verified up and running
Mar  6 03:54:06.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-vqz2v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-615'
Mar  6 03:54:06.375: INFO: stderr: ""
Mar  6 03:54:06.375: INFO: stdout: "true"
Mar  6 03:54:06.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods update-demo-nautilus-vqz2v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-615'
Mar  6 03:54:06.437: INFO: stderr: ""
Mar  6 03:54:06.437: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar  6 03:54:06.437: INFO: validating pod update-demo-nautilus-vqz2v
Mar  6 03:54:06.440: INFO: got data: {
  "image": "nautilus.jpg"
}

Mar  6 03:54:06.440: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar  6 03:54:06.440: INFO: update-demo-nautilus-vqz2v is verified up and running
STEP: using delete to clean up resources
Mar  6 03:54:06.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete --grace-period=0 --force -f - --namespace=kubectl-615'
Mar  6 03:54:06.507: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar  6 03:54:06.507: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar  6 03:54:06.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-615'
Mar  6 03:54:06.603: INFO: stderr: "No resources found in kubectl-615 namespace.\n"
Mar  6 03:54:06.603: INFO: stdout: ""
Mar  6 03:54:06.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pods -l name=update-demo --namespace=kubectl-615 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar  6 03:54:06.685: INFO: stderr: ""
Mar  6 03:54:06.685: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:54:06.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-615" for this suite.

• [SLOW TEST:6.032 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":224,"skipped":4109,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:54:06.692: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-9741
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:54:06.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-640ed9e0-c6a9-4ba9-ab54-bb8f7edbd8ca" in namespace "downward-api-9741" to be "success or failure"
Mar  6 03:54:06.834: INFO: Pod "downwardapi-volume-640ed9e0-c6a9-4ba9-ab54-bb8f7edbd8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014011ms
Mar  6 03:54:08.837: INFO: Pod "downwardapi-volume-640ed9e0-c6a9-4ba9-ab54-bb8f7edbd8ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004750794s
STEP: Saw pod success
Mar  6 03:54:08.837: INFO: Pod "downwardapi-volume-640ed9e0-c6a9-4ba9-ab54-bb8f7edbd8ca" satisfied condition "success or failure"
Mar  6 03:54:08.839: INFO: Trying to get logs from node worker02 pod downwardapi-volume-640ed9e0-c6a9-4ba9-ab54-bb8f7edbd8ca container client-container: 
STEP: delete the pod
Mar  6 03:54:08.864: INFO: Waiting for pod downwardapi-volume-640ed9e0-c6a9-4ba9-ab54-bb8f7edbd8ca to disappear
Mar  6 03:54:08.871: INFO: Pod downwardapi-volume-640ed9e0-c6a9-4ba9-ab54-bb8f7edbd8ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:54:08.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9741" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":4109,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:54:08.880: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-4573
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Mar  6 03:54:09.019: INFO: Waiting up to 5m0s for pod "downward-api-121a39a4-b068-4e2f-9166-c58247718409" in namespace "downward-api-4573" to be "success or failure"
Mar  6 03:54:09.021: INFO: Pod "downward-api-121a39a4-b068-4e2f-9166-c58247718409": Phase="Pending", Reason="", readiness=false. Elapsed: 1.908098ms
Mar  6 03:54:11.023: INFO: Pod "downward-api-121a39a4-b068-4e2f-9166-c58247718409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004459616s
STEP: Saw pod success
Mar  6 03:54:11.023: INFO: Pod "downward-api-121a39a4-b068-4e2f-9166-c58247718409" satisfied condition "success or failure"
Mar  6 03:54:11.025: INFO: Trying to get logs from node worker02 pod downward-api-121a39a4-b068-4e2f-9166-c58247718409 container dapi-container: 
STEP: delete the pod
Mar  6 03:54:11.039: INFO: Waiting for pod downward-api-121a39a4-b068-4e2f-9166-c58247718409 to disappear
Mar  6 03:54:11.042: INFO: Pod downward-api-121a39a4-b068-4e2f-9166-c58247718409 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:54:11.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4573" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":4113,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:54:11.048: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename watch
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in watch-4905
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar  6 03:54:11.184: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4905 /api/v1/namespaces/watch-4905/configmaps/e2e-watch-test-watch-closed 0381c518-eb87-40ad-ab6b-6d6e671b2162 30405 0 2020-03-06 03:54:11 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar  6 03:54:11.184: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4905 /api/v1/namespaces/watch-4905/configmaps/e2e-watch-test-watch-closed 0381c518-eb87-40ad-ab6b-6d6e671b2162 30406 0 2020-03-06 03:54:11 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar  6 03:54:11.195: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4905 /api/v1/namespaces/watch-4905/configmaps/e2e-watch-test-watch-closed 0381c518-eb87-40ad-ab6b-6d6e671b2162 30407 0 2020-03-06 03:54:11 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar  6 03:54:11.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4905 /api/v1/namespaces/watch-4905/configmaps/e2e-watch-test-watch-closed 0381c518-eb87-40ad-ab6b-6d6e671b2162 30408 0 2020-03-06 03:54:11 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:54:11.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4905" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":227,"skipped":4113,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:54:11.205: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in crd-publish-openapi-575
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Mar  6 03:54:11.340: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: rename a version
STEP: check the new version name is served
Mar  6 03:55:11.465: FAIL: failed to wait for definition "com.example.crd-publish-openapi-test-multi-ver.v4.E2e-test-crd-publish-openapi-4948-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: timed out waiting for the condition; lastMsg: spec.SwaggerProps.Definitions["com.example.crd-publish-openapi-test-multi-ver.v4.E2e-test-crd-publish-openapi-4948-crd"] not found

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func4.8()
	/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:402 +0x5d7
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00226f200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc00226f200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc00226f200, 0x4c77cd8)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "crd-publish-openapi-575".
STEP: Found 0 events.
Mar  6 03:55:11.471: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar  6 03:55:11.471: INFO: 
Mar  6 03:55:11.473: INFO: 
Logging node info for node master01
Mar  6 03:55:11.475: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 30340 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:55:11.475: INFO: 
Logging kubelet events for node master01
Mar  6 03:55:11.478: INFO: 
Logging pods the kubelet thinks is on node master01
Mar  6 03:55:11.487: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.487: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:55:11.487: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.487: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:55:11.487: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.487: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:55:11.487: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.487: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:55:11.487: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:55:11.487: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:55:11.487: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:55:11.487: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:55:11.487: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:55:11.487: INFO: 	Container kube-flannel ready: true, restart count 0
W0306 03:55:11.489735      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:55:11.503: INFO: 
Latency metrics for node master01
Mar  6 03:55:11.503: INFO: 
Logging node info for node master02
Mar  6 03:55:11.505: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 30296 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:02 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:55:11.505: INFO: 
Logging kubelet events for node master02
Mar  6 03:55:11.508: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:55:11.520: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.520: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:55:11.520: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.520: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:55:11.520: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.520: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:55:11.520: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.520: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:55:11.520: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:55:11.520: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:55:11.520: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:55:11.520: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.520: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:55:11.520: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:55:11.520: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:55:11.520: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:55:11.523412      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:55:11.539: INFO: 
Latency metrics for node master02
Mar  6 03:55:11.539: INFO: 
Logging node info for node master03
Mar  6 03:55:11.541: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 30299 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:03 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:55:11.541: INFO: 
Logging kubelet events for node master03
Mar  6 03:55:11.542: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:55:11.552: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:55:11.552: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:55:11.552: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:55:11.552: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:55:11.552: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:55:11.552: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:55:11.552: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:55:11.552: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:55:11.552: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:55:11.552: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:55:11.552: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.552: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 03:55:11.555244      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:55:11.569: INFO: 
Latency metrics for node master03
Mar  6 03:55:11.569: INFO: 
Logging node info for node worker01
Mar  6 03:55:11.571: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 28397 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:50:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:50:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:50:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:50:17 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:55:11.571: INFO: 
Logging kubelet events for node worker01
Mar  6 03:55:11.573: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:55:11.587: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:55:11.587: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:55:11.587: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:55:11.587: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:55:11.587: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:55:11.587: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:55:11.587: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:55:11.587: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:55:11.587: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:55:11.587: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:55:11.587: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:55:11.587: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:55:11.587: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:55:11.587: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:55:11.587: INFO: 	Container kube-flannel ready: true, restart count 1
W0306 03:55:11.590781      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:55:11.622: INFO: 
Latency metrics for node worker01
Mar  6 03:55:11.622: INFO: 
Logging node info for node worker02
Mar  6 03:55:11.627: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 30615 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:55:11.627: INFO: 
Logging kubelet events for node worker02
Mar  6 03:55:11.631: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:55:11.636: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.636: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:55:11.636: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:55:11.636: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:55:11.636: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:55:11.636: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:55:11.636: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:55:11.636: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:55:11.636: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:55:11.636: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:55:11.636: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:55:11.636: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:55:11.636: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:55:11.636: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:55:11.636: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:55:11.636: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:55:11.638627      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:55:11.655: INFO: 
Latency metrics for node worker02
Mar  6 03:55:11.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-575" for this suite.

• Failure [60.457 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:55:11.465: failed to wait for definition "com.example.crd-publish-openapi-test-multi-ver.v4.E2e-test-crd-publish-openapi-4948-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: timed out waiting for the condition; lastMsg: spec.SwaggerProps.Definitions["com.example.crd-publish-openapi-test-multi-ver.v4.E2e-test-crd-publish-openapi-4948-crd"] not found

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:402
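The failure above means the renamed version's definition never appeared in the aggregated OpenAPI document (served at `/openapi/v2`, e.g. via `kubectl get --raw /openapi/v2`). The wait that timed out is essentially a poll for a key in `definitions`; a simplified, self-contained sketch of that condition (the fake spec and names below are illustrative, not from the test source):

```python
import time

def wait_for_definition(fetch_spec, name: str, timeout: float = 1.0, interval: float = 0.1) -> bool:
    """Poll an OpenAPI v2 spec until `definitions` contains `name`.

    Simplified model of the condition the e2e test timed out waiting for.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if name in fetch_spec().get("definitions", {}):
            return True
        time.sleep(interval)
    return False

# Fake aggregated spec standing in for the real /openapi/v2 response.
spec = {"definitions": {"com.example.foo.v2.Bar": {}}}

assert wait_for_definition(lambda: spec, "com.example.foo.v2.Bar")
assert not wait_for_definition(lambda: spec, "com.example.foo.v4.Bar", timeout=0.3)
```

In the failing run, the `v4` definition never showed up before the deadline, producing the `not found` lastMsg recorded above.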
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":227,"skipped":4139,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:55:11.662: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-588
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar  6 03:55:11.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-588'
Mar  6 03:55:11.876: INFO: stderr: ""
Mar  6 03:55:11.876: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Mar  6 03:55:16.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 get pod e2e-test-httpd-pod --namespace=kubectl-588 -o json'
Mar  6 03:55:16.989: INFO: stderr: ""
Mar  6 03:55:16.989: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-03-06T03:55:11Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-588\",\n        \"resourceVersion\": \"30677\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-588/pods/e2e-test-httpd-pod\",\n        \"uid\": \"2b385007-c52d-488e-8173-b6b6815a3d83\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-7dgp4\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"worker02\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-7dgp4\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-7dgp4\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-03-06T03:55:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-03-06T03:55:13Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-03-06T03:55:13Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-03-06T03:55:11Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://42e52806665feb74a843eba7e77a6c5527bbe0ee73bff8a7ea029d91f9ce17ab\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                   
     \"startedAt\": \"2020-03-06T03:55:12Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"192.168.1.251\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.3.42\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.3.42\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-03-06T03:55:11Z\"\n    }\n}\n"
STEP: replace the image in the pod
Mar  6 03:55:16.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 replace -f - --namespace=kubectl-588'
Mar  6 03:55:17.194: INFO: stderr: ""
Mar  6 03:55:17.194: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
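The verification step above amounts to reading the pod back with `-o json` and comparing `.spec.containers[0].image` against the replacement image. A minimal sketch of that check, using an abbreviated form of the pod object after the replace (names taken from this run):

```python
import json

# Abbreviated pod object, post-replace; fields mirror the JSON dumped above.
pod_json = """
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "e2e-test-httpd-pod", "namespace": "kubectl-588"},
  "spec": {
    "containers": [
      {"name": "e2e-test-httpd-pod", "image": "docker.io/library/busybox:1.29"}
    ]
  }
}
"""

def container_image(pod: dict, container_name: str) -> str:
    """Return the image of the named container, as the e2e check effectively does."""
    for c in pod["spec"]["containers"]:
        if c["name"] == container_name:
            return c["image"]
    raise KeyError(container_name)

pod = json.loads(pod_json)
assert container_image(pod, "e2e-test-httpd-pod") == "docker.io/library/busybox:1.29"
```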
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902
Mar  6 03:55:17.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete pods e2e-test-httpd-pod --namespace=kubectl-588'
Mar  6 03:55:25.152: INFO: stderr: ""
Mar  6 03:55:25.152: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:55:25.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-588" for this suite.

• [SLOW TEST:13.498 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":228,"skipped":4144,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:55:25.160: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename container-runtime
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in container-runtime-1065
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar  6 03:55:27.302: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
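The scenario above needs a container that runs as a non-root user and writes its termination message to a non-default `terminationMessagePath`. A hypothetical manifest for that shape, expressed as a Python dict (field names are standard Pod fields; the concrete image, path, and UID are illustrative, not taken from the test source):

```python
# Hypothetical pod manifest as a Python dict (serialize with json/yaml to apply).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "termination-message-demo"},
    "spec": {
        "restartPolicy": "Never",
        "securityContext": {"runAsUser": 1000},  # non-root user
        "containers": [{
            "name": "main",
            "image": "busybox:1.29",
            # Write the message to a non-default path, then exit 0 (Succeeded).
            "command": ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom"],
            "terminationMessagePath": "/dev/termination-custom",
            "terminationMessagePolicy": "File",
        }],
    },
}

# The test then compares the container's terminationMessage against "DONE".
assert pod["spec"]["containers"][0]["terminationMessagePath"] != "/dev/termination-log"
assert pod["spec"]["securityContext"]["runAsUser"] != 0
```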
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:55:27.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1065" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":4149,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]}

------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:55:27.324: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-7877
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:55:27.458: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27f69630-b21f-4ba6-b7fe-f98385ce1097" in namespace "downward-api-7877" to be "success or failure"
Mar  6 03:55:27.460: INFO: Pod "downwardapi-volume-27f69630-b21f-4ba6-b7fe-f98385ce1097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.592497ms
Mar  6 03:55:29.463: INFO: Pod "downwardapi-volume-27f69630-b21f-4ba6-b7fe-f98385ce1097": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00494058s
STEP: Saw pod success
Mar  6 03:55:29.463: INFO: Pod "downwardapi-volume-27f69630-b21f-4ba6-b7fe-f98385ce1097" satisfied condition "success or failure"
Mar  6 03:55:29.465: INFO: Trying to get logs from node worker02 pod downwardapi-volume-27f69630-b21f-4ba6-b7fe-f98385ce1097 container client-container: 
STEP: delete the pod
Mar  6 03:55:29.481: INFO: Waiting for pod downwardapi-volume-27f69630-b21f-4ba6-b7fe-f98385ce1097 to disappear
Mar  6 03:55:29.485: INFO: Pod downwardapi-volume-27f69630-b21f-4ba6-b7fe-f98385ce1097 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:55:29.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7877" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":4149,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]}

------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:55:29.492: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename services
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in services-2483
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2483
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-2483
I0306 03:55:29.653444      19 runners.go:189] Created replication controller with name: externalname-service, namespace: services-2483, replica count: 2
Mar  6 03:55:32.704: INFO: Creating new exec pod
I0306 03:55:32.704118      19 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar  6 03:55:35.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2483 execpodppfhn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Mar  6 03:55:35.894: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Mar  6 03:55:35.894: INFO: stdout: ""
Mar  6 03:55:35.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 exec --namespace=services-2483 execpodppfhn -- /bin/sh -x -c nc -zv -t -w 2 10.105.178.42 80'
Mar  6 03:55:36.095: INFO: stderr: "+ nc -zv -t -w 2 10.105.178.42 80\nConnection to 10.105.178.42 80 port [tcp/http] succeeded!\n"
Mar  6 03:55:36.095: INFO: stdout: ""
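The two `nc -zv -t -w 2` invocations above probe TCP reachability of the service by name and by ClusterIP. The equivalent probe logic, sketched self-contained against a throwaway local listener (the real test runs it inside the exec pod against the in-cluster service):

```python
import socket
import threading

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Equivalent of `nc -zv -t -w <timeout> host port`: attempt a TCP connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in for the service endpoint: a throwaway listener on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=server.accept, daemon=True).start()

assert tcp_reachable("127.0.0.1", port)
assert not tcp_reachable("127.0.0.1", 1, timeout=0.2)  # port 1: usually nothing listening
server.close()
```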
Mar  6 03:55:36.095: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:55:36.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2483" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:6.644 seconds]
[sig-network] Services
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":231,"skipped":4149,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:55:36.136: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename gc
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in gc-4676
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0306 03:56:16.320148      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:56:16.320: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:56:16.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4676" for this suite.

• [SLOW TEST:40.191 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":232,"skipped":4160,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]}
SSSSSSSSSS
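The "delete the rc" step in the garbage-collector test above orphans dependents through the delete options sent with the deletion. As a sketch, the request body for a DELETE call that asks the garbage collector to leave dependents in place looks like this (the `propagationPolicy` field is the real Kubernetes API knob; everything else here is the standard DeleteOptions envelope):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With this policy the garbage collector deletes the replication controller but leaves its pods running, which is exactly what the 30-second "mistakenly deletes the pods" check above is verifying.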
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:56:16.327: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-4956
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-3e6f2dbc-5a16-4efa-bd38-d021e582a6d0
STEP: Creating a pod to test consume configMaps
Mar  6 03:56:16.463: INFO: Waiting up to 5m0s for pod "pod-configmaps-7b534a96-abb3-4e11-8634-a25291890dc9" in namespace "configmap-4956" to be "success or failure"
Mar  6 03:56:16.465: INFO: Pod "pod-configmaps-7b534a96-abb3-4e11-8634-a25291890dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043502ms
Mar  6 03:56:18.468: INFO: Pod "pod-configmaps-7b534a96-abb3-4e11-8634-a25291890dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004511965s
STEP: Saw pod success
Mar  6 03:56:18.468: INFO: Pod "pod-configmaps-7b534a96-abb3-4e11-8634-a25291890dc9" satisfied condition "success or failure"
Mar  6 03:56:18.470: INFO: Trying to get logs from node worker02 pod pod-configmaps-7b534a96-abb3-4e11-8634-a25291890dc9 container configmap-volume-test: 
STEP: delete the pod
Mar  6 03:56:18.482: INFO: Waiting for pod pod-configmaps-7b534a96-abb3-4e11-8634-a25291890dc9 to disappear
Mar  6 03:56:18.484: INFO: Pod pod-configmaps-7b534a96-abb3-4e11-8634-a25291890dc9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:56:18.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4956" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":4170,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
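The ConfigMap test above creates a ConfigMap, then a pod that mounts it as a volume and reads a key back. A minimal manifest of that shape looks like the following; the names and the test-image arguments are illustrative, not the exact ones the run above generated:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # illustrative; the run above used a generated name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed test image
    args: ["mounttest", "--file_content=/etc/configmap-volume/data-1"]  # assumed args
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example   # illustrative name
```

The pod runs to completion ("success or failure" in the log above) once the container has printed the mounted key's content.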
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:56:18.491: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-5125
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl run rc
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar  6 03:56:18.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5125'
Mar  6 03:56:18.699: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar  6 03:56:18.699: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Mar  6 03:56:18.719: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-mtktg]
Mar  6 03:56:18.719: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-mtktg" in namespace "kubectl-5125" to be "running and ready"
Mar  6 03:56:18.727: INFO: Pod "e2e-test-httpd-rc-mtktg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824979ms
Mar  6 03:56:20.730: INFO: Pod "e2e-test-httpd-rc-mtktg": Phase="Running", Reason="", readiness=true. Elapsed: 2.011295905s
Mar  6 03:56:20.730: INFO: Pod "e2e-test-httpd-rc-mtktg" satisfied condition "running and ready"
Mar  6 03:56:20.730: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-mtktg]
Mar  6 03:56:20.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 logs rc/e2e-test-httpd-rc --namespace=kubectl-5125'
Mar  6 03:56:20.816: INFO: stderr: ""
Mar  6 03:56:20.816: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.53. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.53. Set the 'ServerName' directive globally to suppress this message\n[Fri Mar 06 03:56:19.555989 2020] [mpm_event:notice] [pid 1:tid 140082740435816] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri Mar 06 03:56:19.556025 2020] [core:notice] [pid 1:tid 140082740435816] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637
Mar  6 03:56:20.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete rc e2e-test-httpd-rc --namespace=kubectl-5125'
Mar  6 03:56:20.889: INFO: stderr: ""
Mar  6 03:56:20.889: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:56:20.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5125" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":234,"skipped":4202,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]}
SS
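The kubectl test above triggers the deprecation warning for `kubectl run --generator=run/v1`, which creates a ReplicationController from an image. An explicit manifest equivalent to that generator is sketched below; the `run:` label is what the generator applies by default, though the exact metadata in the run above was generated:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine
```

Applying a manifest like this with `kubectl create` is the replacement the deprecation warning in the log points toward.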
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:56:20.897: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-5561
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 03:56:21.736: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar  6 03:56:23.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063781, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063781, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063781, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063781, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 03:56:26.756: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
Mar  6 03:56:36.778: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:56:46.888: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:56:56.987: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:57:07.090: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:57:17.099: INFO: Waiting for webhook configuration to be ready...
Mar  6 03:57:17.099: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
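The "timed out waiting for the condition" failure above comes from the framework polling for the webhook configuration to become ready until a deadline passes. A minimal Python sketch of that poll-until-ready pattern follows; the function name and the injectable `clock`/`sleep` parameters are hypothetical conveniences, not the framework's API:

```python
import time

def wait_for_condition(check, timeout=100.0, interval=10.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll `check` every `interval` seconds until it returns True
    or `timeout` seconds have elapsed, then give up."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    # Mirrors the framework's error text when the condition never holds.
    raise TimeoutError("timed out waiting for the condition")
```

With ten-second intervals and a deadline of roughly a hundred seconds, a condition that never becomes true produces a handful of "Waiting for ..." attempts and then the timeout error, matching the cadence of the log lines above.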
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-5561".
STEP: Found 6 events.
Mar  6 03:57:17.112: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-kzbj8: {default-scheduler } Scheduled: Successfully assigned webhook-5561/sample-webhook-deployment-5f65f8c764-kzbj8 to worker02
Mar  6 03:57:17.112: INFO: At 2020-03-06 03:56:21 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 03:57:17.112: INFO: At 2020-03-06 03:56:21 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-kzbj8
Mar  6 03:57:17.112: INFO: At 2020-03-06 03:56:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-kzbj8: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 03:57:17.112: INFO: At 2020-03-06 03:56:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-kzbj8: {kubelet worker02} Created: Created container sample-webhook
Mar  6 03:57:17.112: INFO: At 2020-03-06 03:56:23 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-kzbj8: {kubelet worker02} Started: Started container sample-webhook
Mar  6 03:57:17.118: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:57:17.118: INFO: sample-webhook-deployment-5f65f8c764-kzbj8  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:56:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:56:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:56:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:56:21 +0000 UTC  }]
Mar  6 03:57:17.118: INFO: 
Mar  6 03:57:17.121: INFO: 
Logging node info for node master01
Mar  6 03:57:17.123: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 30340 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:57:17.123: INFO: 
Logging kubelet events for node master01
Mar  6 03:57:17.126: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:57:17.135: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.135: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:57:17.135: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.135: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:57:17.135: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.135: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:57:17.135: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.135: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:57:17.135: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:57:17.135: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:57:17.135: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:57:17.135: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:57:17.135: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:57:17.135: INFO: 	Container kube-flannel ready: true, restart count 0
W0306 03:57:17.139400      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:57:17.157: INFO: 
Latency metrics for node master01
Mar  6 03:57:17.157: INFO: 
Logging node info for node master02
Mar  6 03:57:17.159: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 30296 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:02 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:02 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:57:17.159: INFO: 
Logging kubelet events for node master02
Mar  6 03:57:17.161: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:57:17.170: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.170: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:57:17.170: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:57:17.170: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:57:17.170: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:57:17.170: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.170: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:57:17.170: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.170: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:57:17.170: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.170: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:57:17.170: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.170: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:57:17.170: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:57:17.170: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:57:17.170: INFO: 	Container kube-flannel ready: true, restart count 0
W0306 03:57:17.173099      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:57:17.186: INFO: 
Latency metrics for node master02
Mar  6 03:57:17.186: INFO: 
Logging node info for node master03
Mar  6 03:57:17.188: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 30299 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:03 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:57:17.189: INFO: 
Logging kubelet events for node master03
Mar  6 03:57:17.190: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:57:17.200: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:57:17.200: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:57:17.200: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:57:17.200: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
Mar  6 03:57:17.200: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:57:17.200: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:57:17.200: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:57:17.200: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:57:17.200: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:57:17.200: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:57:17.200: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.200: INFO: 	Container coredns ready: true, restart count 0
W0306 03:57:17.203056      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:57:17.218: INFO: 
Latency metrics for node master03
Mar  6 03:57:17.218: INFO: 
Logging node info for node worker01
Mar  6 03:57:17.220: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 30710 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:57:17.220: INFO: 
Logging kubelet events for node worker01
Mar  6 03:57:17.222: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:57:17.232: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:57:17.232: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:57:17.232: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:57:17.232: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:57:17.232: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:57:17.232: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:57:17.232: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:57:17.232: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:57:17.232: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:57:17.232: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:57:17.232: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:57:17.232: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:57:17.232: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:57:17.232: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.232: INFO: 	Container contour ready: false, restart count 0
W0306 03:57:17.234756      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:57:17.251: INFO: 
Latency metrics for node worker01
Mar  6 03:57:17.251: INFO: 
Logging node info for node worker02
Mar  6 03:57:17.253: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 30615 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:57 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:57:17.253: INFO: 
Logging kubelet events for node worker02
Mar  6 03:57:17.255: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:57:17.259: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.259: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:57:17.259: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.259: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:57:17.259: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:57:17.259: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:57:17.259: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:57:17.259: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:57:17.259: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:57:17.259: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:57:17.259: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:57:17.259: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:57:17.259: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:57:17.259: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:57:17.259: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:57:17.259: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:57:17.259: INFO: sample-webhook-deployment-5f65f8c764-kzbj8 started at 2020-03-06 03:56:21 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:57:17.259: INFO: 	Container sample-webhook ready: true, restart count 0
W0306 03:57:17.262109      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:57:17.287: INFO: 
Latency metrics for node worker02
Mar  6 03:57:17.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5561" for this suite.
STEP: Destroying namespace "webhook-5561-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [56.465 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:57:17.100: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":234,"skipped":4204,"failed":17,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate 
custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:57:17.363: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-8224
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8224
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar  6 03:57:17.510: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar  6 03:57:41.556: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.64 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8224 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:57:41.556: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:57:42.659: INFO: Found all expected endpoints: [netserver-0]
Mar  6 03:57:42.662: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.55 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8224 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 03:57:42.662: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 03:57:43.805: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:57:43.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8224" for this suite.

• [SLOW TEST:26.457 seconds]
[sig-network] Networking
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":4230,"failed":17,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:57:43.820: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename aggregator
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in aggregator-8308
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Mar  6 03:57:43.950: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Mar  6 03:57:44.290: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created
Mar  6 03:57:46.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:57:48.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:57:50.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:57:52.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:57:54.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:57:56.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:57:58.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:58:00.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:58:02.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719063864, loc:(*time.Location)(0x7db7bc0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar  6 03:59:04.747: INFO: Waited 1m0.398909068s for the sample-apiserver to be ready to handle requests.
Mar  6 03:59:04.747: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.k8s.io","selfLink":"/apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.wardle.k8s.io","uid":"741d3e7c-7f80-47cd-9a53-ea5499dc5763","resourceVersion":"31835","creationTimestamp":"2020-03-06T03:58:04Z"},"spec":{"service":{"namespace":"aggregator-8308","name":"sample-api","port":7443},"group":"wardle.k8s.io","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyRENDQWNDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpBd016QTJNRE0xTnpRMFdoY05NekF3TXpBME1ETTFOelEwV2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURYWDE4LzAwdUJxT3pGRmRWUW1sYWxVSnY3dythNkFUZFc5Q1BzUHpvRFJoNzIKbHdESVUybk0zUXBJdXNsWDAvRnYxYmFoSWxSN1JTZ3ZWUTgwWE1mVDBycFJ1SmFLYnJhUGlyZXk2MzViQUc3TQpMZjhrcmxGS3JsRlJ6TDlabVd1aHFlQkV5RDViSUlYdWNhUGpNWjk0aUdYOWsrQ01FeVB3aUUrQXdVTUs3WUdkCkJ4c2FNU1dMLzVhVGcybUI4QlIzc3FoRFNiTDN5anc0bUpZb1FFQlNDNWVxVFc5NFpCcDNseDVUbjdNZkhxbGgKZkVRWUx3dnFXZlJjSTltVkM4TWpTa0grbldPRVZKOXI4MUlUbXNDR3Q1Y01UUk51SWE4VXJ0VC9PZ0VsWWwrMQpWRVBLdU1Xb0N2RlJKWHhEUFhtS2VldGhyOTdTUFpneE1iNkhLU3ZwQWdNQkFBR2pJekFoTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDclV3TDkKbHZqbCs1L3BseS9UbnArUkdDNlBabThjVUYycUJBMU1wdmRvRERxNjd4UlBPQTNiUU55eXRMRENQeUR4NncydApXNXN2RUVYZUFIV1BKRGFjYlVmM005ZTFKcnNslrVzdkbjQ4eTQxTHpvMFFuN2NGM1NJWk5mL1Joa3RHcmhtV1EKbUpjZlM1L0xLbE1MalFnR2RoU05YWEt4K0RNNDZtMUExdXFocXVpQXJoY05EYm1ONzhSbkhYY1FLL0VLU1pWdQoyZEZCNVZwWnJXNVcycjYrWTdlL1U4K29yZUVFZFZjNEM4T080Tjl5SkREdDZYUXoyU28zNkZXUXdmNm1HUnZLClMzbEo3TXNSZEhGUVV2QlVHRG1EZWFERHVnTGVOSlJvdXB1U2FLYm5iWFhlZ25VTTJteS9rYjRTb3duM1FkUmIKQmh2QzdHd3dYcDVLZ05GagotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2020-03-06T03:58:04Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.104.125.70:7443/apis/wardle.k8s.io/v1alpha1: Get https://10.104.125.70:7443/apis/wardle.k8s.io/v1alpha1: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}}
Mar  6 03:59:04.747: INFO: current pods: {"metadata":{"selfLink":"/api/v1/namespaces/aggregator-8308/pods","resourceVersion":"32009"},"items":[{"metadata":{"name":"sample-apiserver-deployment-867766ffc6-rvgfd","generateName":"sample-apiserver-deployment-867766ffc6-","namespace":"aggregator-8308","selfLink":"/api/v1/namespaces/aggregator-8308/pods/sample-apiserver-deployment-867766ffc6-rvgfd","uid":"25c65b47-a66f-4cdc-9894-eadcfbd3c6fe","resourceVersion":"31814","creationTimestamp":"2020-03-06T03:57:44Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"867766ffc6"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-867766ffc6","uid":"ff98732e-a3fb-4a6c-ac1d-6fe6a06e298a","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"default-token-9g9df","secret":{"secretName":"default-token-9g9df","defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10","args":["--etcd-servers=http://localhost:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"default-token-9g9df","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.3","command":["/usr/local/bin/etcd"],"resources":{},"volumeMounts":[{"name":"default-token-9g9df","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"worker02","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-06T03:57:44Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-06T03:58:04Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-06T03:58:04Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-06T03:57:44Z"}],"hostIP":"192.168.1.251","podIP":"10.244.3.57","podIPs":[{"ip":"10.244.3.57"}],"startTime":"2020-03-06T03:57:44Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2020-03-06T03:58:02Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.3","imageID":"docker-pullable://k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646","containerID":"docker://760de42a17d2dc72eb4140cb4b350a1943d4cfa6a39880d900f403c858da31ac","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2020-03-06T03:58:03Z"}},"lastState":{"terminated":{"exitCode":255,"reason":"Error","startedAt":"2020-03-06T03:57:48Z","finishedAt":"2020-03-06T03:57:59Z","containerID":"docker://5595e950a07e2b884de979304884916564a58391ae09dd8824383b3f71c76d8e"}},"ready":true,"restartCount":1,"image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10","imageID":"docker-pullable://gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0","containerID":"docker://51547fcf77d00528fdd6a0dcb493296f21cfcd02443bdd55b26ecf6560d11ca8","started":true}],"qosClass":"BestEffort"}}]}
Mar  6 03:59:04.761: INFO: logs of sample-apiserver-deployment-867766ffc6-rvgfd/sample-apiserver (error: ): I0306 03:58:03.820189       1 plugins.go:149] Loaded 3 admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,ValidatingAdmissionWebhook.
I0306 03:58:03.839629       1 serve.go:96] Serving securely on [::]:443
E0306 03:58:03.842565       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1.Namespace: unknown (get namespaces)
E0306 03:58:03.844419       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:03.845235       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:04.844625       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1.Namespace: unknown (get namespaces)
E0306 03:58:04.851416       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:04.851465       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:05.846498       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1.Namespace: unknown (get namespaces)
E0306 03:58:05.854813       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:05.855292       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:06.848341       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1.Namespace: unknown (get namespaces)
E0306 03:58:06.859078       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:06.859318       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:07.850109       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1.Namespace: unknown (get namespaces)
E0306 03:58:07.863033       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:07.863903       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:08.851957       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1.Namespace: unknown (get namespaces)
E0306 03:58:08.867396       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:08.867807       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:09.853861       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1.Namespace: unknown (get namespaces)
E0306 03:58:09.872824       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:09.872864       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:10.855853       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1.Namespace: unknown (get namespaces)
E0306 03:58:10.876375       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:58:10.876598       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:59:04.026223       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.ValidatingWebhookConfiguration: unknown (get validatingwebhookconfigurations.admissionregistration.k8s.io)
E0306 03:59:04.064180       1 reflector.go:322] k8s.io/sample-apiserver/vendor/k8s.io/client-go/informers/factory.go:87: Failed to watch *v1beta1.MutatingWebhookConfiguration: unknown (get mutatingwebhookconfigurations.admissionregistration.k8s.io)
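Editor's note: the repeated `Failed to watch ... unknown (get ...)` errors above come from the sample-apiserver's informers being unable to list/watch the named resources, which in aggregated-apiserver setups commonly indicates missing RBAC for the extension apiserver's service account. A minimal sketch of a ClusterRole that would grant exactly the reads failing above (the role name is an assumption, not taken from this run, and this is one possible remediation, not a confirmed root cause):

```yaml
# Hypothetical ClusterRole: grants read access to the three resource types
# the sample-apiserver failed to watch in the log above. Bind it to the
# extension apiserver's service account with a ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sample-apiserver-reader   # assumed name
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  verbs: ["get", "list", "watch"]
```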

Mar  6 03:59:04.769: INFO: logs of sample-apiserver-deployment-867766ffc6-rvgfd/etcd (error: ): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-06 03:58:02.510259 I | etcdmain: etcd Version: 3.4.3
2020-03-06 03:58:02.510291 I | etcdmain: Git SHA: 3cf2f69b5
2020-03-06 03:58:02.510295 I | etcdmain: Go Version: go1.12.12
2020-03-06 03:58:02.510299 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-06 03:58:02.510303 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-03-06 03:58:02.510312 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-06 03:58:02.510801 I | embed: name = default
2020-03-06 03:58:02.510809 I | embed: data dir = default.etcd
2020-03-06 03:58:02.510813 I | embed: member dir = default.etcd/member
2020-03-06 03:58:02.510817 I | embed: heartbeat = 100ms
2020-03-06 03:58:02.510821 I | embed: election = 1000ms
2020-03-06 03:58:02.510824 I | embed: snapshot count = 100000
2020-03-06 03:58:02.510839 I | embed: advertise client URLs = http://localhost:2379
2020-03-06 03:58:02.517887 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
raft2020/03/06 03:58:02 INFO: 8e9e05c52164694d switched to configuration voters=()
raft2020/03/06 03:58:02 INFO: 8e9e05c52164694d became follower at term 0
raft2020/03/06 03:58:02 INFO: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/03/06 03:58:02 INFO: 8e9e05c52164694d became follower at term 1
raft2020/03/06 03:58:02 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437)
2020-03-06 03:58:02.524778 W | auth: simple token is not cryptographically signed
2020-03-06 03:58:02.526897 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-03-06 03:58:02.528607 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-03-06 03:58:02.528923 I | embed: listening for peers on 127.0.0.1:2380
raft2020/03/06 03:58:02 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437)
2020-03-06 03:58:02.529298 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
raft2020/03/06 03:58:03 INFO: 8e9e05c52164694d is starting a new election at term 1
raft2020/03/06 03:58:03 INFO: 8e9e05c52164694d became candidate at term 2
raft2020/03/06 03:58:03 INFO: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
raft2020/03/06 03:58:03 INFO: 8e9e05c52164694d became leader at term 2
raft2020/03/06 03:58:03 INFO: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
2020-03-06 03:58:03.018513 I | etcdserver: setting up the initial cluster version to 3.4
2020-03-06 03:58:03.019237 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-06 03:58:03.019286 I | etcdserver/api: enabled capabilities for version 3.4
2020-03-06 03:58:03.019312 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2020-03-06 03:58:03.019372 I | embed: ready to serve client requests
2020-03-06 03:58:03.020160 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!

Mar  6 03:59:04.769: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "aggregator-8308".
STEP: Found 12 events.
Mar  6 03:59:05.083: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {default-scheduler } Scheduled: Successfully assigned aggregator-8308/sample-apiserver-deployment-867766ffc6-rvgfd to worker02
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:57:44 +0000 UTC - event for sample-apiserver-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-apiserver-deployment-867766ffc6 to 1
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:57:44 +0000 UTC - event for sample-apiserver-deployment-867766ffc6: {replicaset-controller } SuccessfulCreate: Created pod: sample-apiserver-deployment-867766ffc6-rvgfd
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:57:45 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Pulling: Pulling image "gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10"
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:57:47 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Pulled: Successfully pulled image "gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10"
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:57:48 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Created: Created container sample-apiserver
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:57:48 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Started: Started container sample-apiserver
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:57:48 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Pulling: Pulling image "k8s.gcr.io/etcd:3.4.3"
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:58:02 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Pulled: Successfully pulled image "k8s.gcr.io/etcd:3.4.3"
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:58:02 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Created: Created container etcd
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:58:02 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Started: Started container etcd
Mar  6 03:59:05.083: INFO: At 2020-03-06 03:58:03 +0000 UTC - event for sample-apiserver-deployment-867766ffc6-rvgfd: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10" already present on machine
Mar  6 03:59:05.087: INFO: POD                                           NODE      PHASE    GRACE  CONDITIONS
Mar  6 03:59:05.087: INFO: sample-apiserver-deployment-867766ffc6-rvgfd  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:57:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:58:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:58:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 03:57:44 +0000 UTC  }]
Mar  6 03:59:05.087: INFO: 
Mar  6 03:59:05.092: INFO: 
Logging node info for node master01
Mar  6 03:59:05.096: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 30340 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:54:06 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:59:05.096: INFO: 
Logging kubelet events for node master01
Mar  6 03:59:05.115: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 03:59:05.142: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:59:05.142: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:59:05.142: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:59:05.142: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:59:05.142: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:59:05.142: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:59:05.142: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.142: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:59:05.142: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.142: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:59:05.142: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.142: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:59:05.142: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.142: INFO: 	Container kube-scheduler ready: true, restart count 1
W0306 03:59:05.146083      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:59:05.176: INFO: 
Latency metrics for node master01
Mar  6 03:59:05.176: INFO: 
Logging node info for node master02
Mar  6 03:59:05.178: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 32005 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:59:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:59:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:59:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:59:03 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:59:05.178: INFO: 
Logging kubelet events for node master02
Mar  6 03:59:05.180: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 03:59:05.194: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.194: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 03:59:05.194: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.194: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:59:05.194: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.194: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:59:05.194: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.194: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:59:05.194: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:59:05.194: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:59:05.194: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:59:05.194: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.194: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:59:05.194: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:59:05.194: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:59:05.194: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 03:59:05.196597      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:59:05.251: INFO: 
Latency metrics for node master02
Mar  6 03:59:05.251: INFO: 
Logging node info for node master03
Mar  6 03:59:05.255: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 32008 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:59:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:59:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:59:03 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:59:03 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:59:05.255: INFO: 
Logging kubelet events for node master03
Mar  6 03:59:05.257: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 03:59:05.270: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:59:05.270: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:59:05.270: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
Mar  6 03:59:05.270: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 03:59:05.270: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 03:59:05.270: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:59:05.270: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 03:59:05.270: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Container coredns ready: true, restart count 0
Mar  6 03:59:05.270: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:59:05.270: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:59:05.270: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.270: INFO: 	Container kube-apiserver ready: true, restart count 0
W0306 03:59:05.273971      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:59:05.332: INFO: 
Latency metrics for node master03
Mar  6 03:59:05.332: INFO: 
Logging node info for node worker01
Mar  6 03:59:05.347: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 30710 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:55:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:59:05.347: INFO: 
Logging kubelet events for node worker01
Mar  6 03:59:05.350: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 03:59:05.361: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:59:05.361: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:59:05.361: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:59:05.361: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:59:05.361: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:59:05.361: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:59:05.361: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:59:05.361: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:59:05.361: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:59:05.361: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:59:05.361: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:59:05.361: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:59:05.361: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:59:05.361: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:59:05.361: INFO: 	Container envoy ready: false, restart count 0
W0306 03:59:05.368219      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:59:05.435: INFO: 
Latency metrics for node worker01
Mar  6 03:59:05.435: INFO: 
Logging node info for node worker02
Mar  6 03:59:05.461: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 31864 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 03:58:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 03:58:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 03:58:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 03:58:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 03:59:05.461: INFO: 
Logging kubelet events for node worker02
Mar  6 03:59:05.468: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 03:59:05.489: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.489: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:59:05.489: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 03:59:05.489: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:59:05.489: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:59:05.489: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:59:05.489: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:59:05.489: INFO: sample-apiserver-deployment-867766ffc6-rvgfd started at 2020-03-06 03:57:44 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:59:05.489: INFO: 	Container etcd ready: true, restart count 0
Mar  6 03:59:05.489: INFO: 	Container sample-apiserver ready: true, restart count 1
Mar  6 03:59:05.489: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:59:05.489: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 03:59:05.489: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:59:05.489: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 03:59:05.489: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 03:59:05.489: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:59:05.489: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 03:59:05.489: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:59:05.489: INFO: 	Container sonobuoy-worker ready: true, restart count 0
W0306 03:59:05.499603      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 03:59:05.584: INFO: 
Latency metrics for node worker02
Mar  6 03:59:05.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8308" for this suite.

• Failure [81.808 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 03:59:04.769: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:391
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":235,"skipped":4244,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate 
custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:59:05.627: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-61
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-ed9bef43-3506-4683-a010-c270f7868e15
STEP: Creating a pod to test consume configMaps
Mar  6 03:59:05.780: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f4aa5d4-ef41-4216-93e7-29058830a69c" in namespace "projected-61" to be "success or failure"
Mar  6 03:59:05.782: INFO: Pod "pod-projected-configmaps-0f4aa5d4-ef41-4216-93e7-29058830a69c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421954ms
Mar  6 03:59:07.785: INFO: Pod "pod-projected-configmaps-0f4aa5d4-ef41-4216-93e7-29058830a69c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004872463s
STEP: Saw pod success
Mar  6 03:59:07.785: INFO: Pod "pod-projected-configmaps-0f4aa5d4-ef41-4216-93e7-29058830a69c" satisfied condition "success or failure"
Mar  6 03:59:07.787: INFO: Trying to get logs from node worker02 pod pod-projected-configmaps-0f4aa5d4-ef41-4216-93e7-29058830a69c container projected-configmap-volume-test: 
STEP: delete the pod
Mar  6 03:59:07.800: INFO: Waiting for pod pod-projected-configmaps-0f4aa5d4-ef41-4216-93e7-29058830a69c to disappear
Mar  6 03:59:07.803: INFO: Pod pod-projected-configmaps-0f4aa5d4-ef41-4216-93e7-29058830a69c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:59:07.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-61" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":4245,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should 
mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:59:07.814: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename downward-api
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in downward-api-3871
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 03:59:07.948: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d63967ff-461b-4b0b-b16b-57d2e4ef12cf" in namespace "downward-api-3871" to be "success or failure"
Mar  6 03:59:07.950: INFO: Pod "downwardapi-volume-d63967ff-461b-4b0b-b16b-57d2e4ef12cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.419288ms
Mar  6 03:59:09.952: INFO: Pod "downwardapi-volume-d63967ff-461b-4b0b-b16b-57d2e4ef12cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004719762s
STEP: Saw pod success
Mar  6 03:59:09.952: INFO: Pod "downwardapi-volume-d63967ff-461b-4b0b-b16b-57d2e4ef12cf" satisfied condition "success or failure"
Mar  6 03:59:09.955: INFO: Trying to get logs from node worker02 pod downwardapi-volume-d63967ff-461b-4b0b-b16b-57d2e4ef12cf container client-container: 
STEP: delete the pod
Mar  6 03:59:09.969: INFO: Waiting for pod downwardapi-volume-d63967ff-461b-4b0b-b16b-57d2e4ef12cf to disappear
Mar  6 03:59:09.972: INFO: Pod downwardapi-volume-d63967ff-461b-4b0b-b16b-57d2e4ef12cf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:59:09.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3871" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":4270,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:59:09.978: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-931
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:59:10.129: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar  6 03:59:10.134: INFO: Number of nodes with available pods: 0
Mar  6 03:59:10.134: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar  6 03:59:10.148: INFO: Number of nodes with available pods: 0
Mar  6 03:59:10.148: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:59:11.152: INFO: Number of nodes with available pods: 0
Mar  6 03:59:11.152: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:59:12.151: INFO: Number of nodes with available pods: 1
Mar  6 03:59:12.151: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar  6 03:59:12.163: INFO: Number of nodes with available pods: 1
Mar  6 03:59:12.163: INFO: Number of running nodes: 0, number of available pods: 1
Mar  6 03:59:13.166: INFO: Number of nodes with available pods: 0
Mar  6 03:59:13.166: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar  6 03:59:13.171: INFO: Number of nodes with available pods: 0
Mar  6 03:59:13.171: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:59:14.174: INFO: Number of nodes with available pods: 0
Mar  6 03:59:14.174: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:59:15.174: INFO: Number of nodes with available pods: 0
Mar  6 03:59:15.174: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:59:16.174: INFO: Number of nodes with available pods: 0
Mar  6 03:59:16.174: INFO: Node worker02 is running more than one daemon pod
Mar  6 03:59:17.174: INFO: Number of nodes with available pods: 1
Mar  6 03:59:17.174: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-931, will wait for the garbage collector to delete the pods
Mar  6 03:59:17.235: INFO: Deleting DaemonSet.extensions daemon-set took: 4.9518ms
Mar  6 03:59:17.335: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.114322ms
Mar  6 03:59:25.237: INFO: Number of nodes with available pods: 0
Mar  6 03:59:25.237: INFO: Number of running nodes: 0, number of available pods: 0
Mar  6 03:59:25.238: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-931/daemonsets","resourceVersion":"32219"},"items":null}

Mar  6 03:59:25.240: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-931/pods","resourceVersion":"32219"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:59:25.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-931" for this suite.

• [SLOW TEST:15.281 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
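The node-selector behaviour exercised above (daemon pods appear only on nodes whose labels match the selector, and are evicted when the label is changed from blue to green) reduces to a map-subset check. A minimal illustrative sketch in Python; the function name and dict shapes are hypothetical, not taken from the e2e framework:

```python
def selector_matches(node_labels, node_selector):
    """True when every key/value pair in the DaemonSet's nodeSelector
    is present, with the same value, in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# A node relabelled from blue to green stops matching a blue selector:
blue_node = {"color": "blue"}
green_node = {"color": "green"}
selector = {"color": "blue"}

print(selector_matches(blue_node, selector))   # True  -> daemon pod scheduled
print(selector_matches(green_node, selector))  # False -> daemon pod evicted
```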
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":238,"skipped":4270,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:59:25.259: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename events
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in events-1799
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Mar  6 03:59:27.424: INFO: &Pod{ObjectMeta:{send-events-397faf29-a0a1-4cde-af0e-ec97abcb0999  events-1799 /api/v1/namespaces/events-1799/pods/send-events-397faf29-a0a1-4cde-af0e-ec97abcb0999 bcd53780-067a-4333-80b0-b070f6663b38 32238 0 2020-03-06 03:59:25 +0000 UTC   map[name:foo time:407088895] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j54kj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j54kj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j54kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{S
ELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.62,StartTime:2020-03-06 03:59:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://c64bfb1e1f6528df35b80b4f9d3139c7b6f68567b0cec4341ed4eb2943a7ce4b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.62,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Mar  6 03:59:29.426: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Mar  6 03:59:31.429: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:59:31.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1799" for this suite.

• [SLOW TEST:6.187 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
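The Events test above checks for exactly one scheduler event and one kubelet event about the pod. Conceptually this is a filter over the event list on the involved object and the reporting component; a rough sketch over plain dicts (the field names here are illustrative, not the client-go API):

```python
# Illustrative sketch: the e2e test queries events with field selectors,
# roughly equivalent to filtering on the involved object's name and the
# event source component ("default-scheduler" vs "kubelet").

def events_for(events, pod_name, component):
    return [
        e for e in events
        if e["involved_object"] == pod_name and e["source"] == component
    ]

events = [
    {"involved_object": "send-events-123", "source": "default-scheduler", "reason": "Scheduled"},
    {"involved_object": "send-events-123", "source": "kubelet", "reason": "Started"},
    {"involved_object": "other-pod", "source": "kubelet", "reason": "Started"},
]

# One event from each component for our pod, as the test expects:
assert len(events_for(events, "send-events-123", "default-scheduler")) == 1
assert len(events_for(events, "send-events-123", "kubelet")) == 1
```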
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":239,"skipped":4271,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:59:31.446: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-9646
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Mar  6 03:59:34.103: INFO: Successfully updated pod "annotationupdatefd122ea9-3168-4803-a585-6f53dd8d5101"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:59:36.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9646" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4272,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
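The projected downwardAPI test above mounts `metadata.annotations` as a file and polls that file until the annotation update made at 03:59:34 becomes visible. A sketch of the one-key-per-line rendering the downward API uses for labels/annotations files, assuming simple `key="value"` lines in sorted key order (an approximation of kubelet's actual Go-quoted serialization):

```python
def render_annotations(annotations):
    """Render a labels/annotations dict the way a downward API volume
    file presents it: one key="value" pair per line, keys sorted.
    Approximation for illustration; kubelet uses Go string quoting."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(annotations.items()))

before = render_annotations({"builder": "alice"})
after = render_annotations({"builder": "bob"})
# The test passes once the mounted file's content changes like this:
print(before)  # builder="alice"
print(after)   # builder="bob"
```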
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:59:36.125: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename deployment
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in deployment-6970
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 03:59:36.255: INFO: Creating deployment "webserver-deployment"
Mar  6 03:59:36.258: INFO: Waiting for observed generation 1
Mar  6 03:59:38.265: INFO: Waiting for all required pods to come up
Mar  6 03:59:38.270: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar  6 03:59:40.284: INFO: Waiting for deployment "webserver-deployment" to complete
Mar  6 03:59:40.287: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar  6 03:59:40.296: INFO: Updating deployment webserver-deployment
Mar  6 03:59:40.296: INFO: Waiting for observed generation 2
Mar  6 03:59:42.300: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar  6 03:59:42.304: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar  6 03:59:42.308: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar  6 03:59:42.315: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar  6 03:59:42.315: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar  6 03:59:42.318: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar  6 03:59:42.321: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar  6 03:59:42.321: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar  6 03:59:42.326: INFO: Updating deployment webserver-deployment
Mar  6 03:59:42.326: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar  6 03:59:42.331: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar  6 03:59:42.337: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
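The replica counts verified above follow the deployment controller's proportional-scaling rule: when the deployment is scaled from 10 to 30, the `max-replicas` annotation moves from 13 (10 + maxSurge 3) to 33, and each ReplicaSet is resized in proportion to its share of the old maximum. A rough sketch of the rounding rule, assuming plain round-to-nearest; the real controller additionally caps the summed deltas and distributes any leftover, which this simplification omits:

```python
def proportional_scale(rs_sizes, old_max, new_max):
    """Resize each ReplicaSet proportionally to its share of old_max.

    rs_sizes: current .spec.replicas of each ReplicaSet
    old_max/new_max: the deployment.kubernetes.io/max-replicas annotation
    before and after the scale (replicas + maxSurge: 13 and 33 above).
    """
    return [round(size * new_max / old_max) for size in rs_sizes]

# First rollout's RS at 8 replicas, second at 5; scaling 10 -> 30:
print(proportional_scale([8, 5], 13, 33))  # [20, 13], matching the log
```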
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Mar  6 03:59:42.360: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-6970 /apis/apps/v1/namespaces/deployment-6970/deployments/webserver-deployment 3f962e33-a494-4fa9-9318-3806afd9453b 32537 3 2020-03-06 03:59:36 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0018f9818  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-06 03:59:40 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-06 03:59:42 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 
UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Mar  6 03:59:42.369: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-6970 /apis/apps/v1/namespaces/deployment-6970/replicasets/webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 32529 3 2020-03-06 03:59:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3f962e33-a494-4fa9-9318-3806afd9453b 0xc003d799a7 0xc003d799a8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d79a18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:59:42.369: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Mar  6 03:59:42.369: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-6970 /apis/apps/v1/namespaces/deployment-6970/replicasets/webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 32527 3 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3f962e33-a494-4fa9-9318-3806afd9453b 0xc003d798e7 0xc003d798e8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d79948  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Mar  6 03:59:42.392: INFO: Pod "webserver-deployment-595b5b9587-54wwk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-54wwk webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-54wwk 16c6d378-fb83-4ca4-ac57-d222cd79a31b 32432 0 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0018f9c07 0xc0018f9c08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.68,StartTime:2020-03-06 03:59:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e912527d89a5b72681eff6231abe12f8653d1af1e46d9c7ce002835ff887d6f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.392: INFO: Pod "webserver-deployment-595b5b9587-62zgf" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-62zgf webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-62zgf 811b51bb-c151-4e27-8875-3c85a067bb3d 32437 0 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0018f9d80 0xc0018f9d81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.67,StartTime:2020-03-06 03:59:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://80d83de6afdd72b425209798634176b5c816ac9468230d9c2f797a822e5543f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.392: INFO: Pod "webserver-deployment-595b5b9587-6bzcw" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6bzcw webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-6bzcw dc9a1cf4-a51b-4dec-9bdb-6e311be60c3e 32426 0 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0018f9ef0 0xc0018f9ef1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.65,StartTime:2020-03-06 03:59:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://9f98fb9eee3a85f9bc1191ecf4ecf059600f54fb9f3832695c0d41fbccf3f91c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.392: INFO: Pod "webserver-deployment-595b5b9587-d5pdc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-d5pdc webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-d5pdc b6690002-eac2-4957-ae86-e4411bd93f5f 32560 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4060 0xc0045f4061}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.392: INFO: Pod "webserver-deployment-595b5b9587-ff5ql" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ff5ql webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-ff5ql 145d1043-8e67-4121-a5a7-8a14a39862a8 32416 0 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4147 0xc0045f4148}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.250,PodIP:10.244.4.68,StartTime:2020-03-06 03:59:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://87b8993c3ae82e8cf466adce59af9806cdd8cae43cebb09554ca5c478d9769d5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.392: INFO: Pod "webserver-deployment-595b5b9587-hk8f9" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hk8f9 webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-hk8f9 cad77f8d-7464-49f6-878a-ef3e08cf6ef0 32413 0 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f42c0 0xc0045f42c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.250,PodIP:10.244.4.67,StartTime:2020-03-06 03:59:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://beb5a32911e0ae08241a89787315ee399cb314fbfa76d184ad399f1cc16d83ef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.392: INFO: Pod "webserver-deployment-595b5b9587-j6tkp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-j6tkp webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-j6tkp 6793fb62-240c-4eea-868c-69dc4a086d3f 32565 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4430 0xc0045f4431}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.392: INFO: Pod "webserver-deployment-595b5b9587-jxvg9" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jxvg9 webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-jxvg9 c29c0df8-1ace-4191-948b-737f0a6ddecf 32549 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4517 0xc0045f4518}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.393: INFO: Pod "webserver-deployment-595b5b9587-k4gkw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-k4gkw webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-k4gkw 22a8f815-e8e1-431f-9b15-097dd14d66f9 32544 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4630 0xc0045f4631}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.393: INFO: Pod "webserver-deployment-595b5b9587-kf9kp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kf9kp webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-kf9kp d91bfefb-516c-4cdc-9b77-07bd048f7540 32569 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4740 0xc0045f4741}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDead
lineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.393: INFO: Pod "webserver-deployment-595b5b9587-kthtv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kthtv webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-kthtv a8b13d14-7e09-4af1-88a1-d03bbed03c0d 32567 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4827 0xc0045f4828}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDead
lineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:,StartTime:2020-03-06 03:59:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.393: INFO: Pod "webserver-deployment-595b5b9587-mbs2g" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mbs2g webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-mbs2g b0e8d69a-05ad-4c67-9b7e-27b2a1f79af7 32551 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4987 0xc0045f4988}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDead
lineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.393: INFO: Pod "webserver-deployment-595b5b9587-nwx72" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nwx72 webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-nwx72 ea0cc49b-d397-45bb-bb5f-be7a78621990 32562 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4aa0 0xc0045f4aa1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDead
lineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.393: INFO: Pod "webserver-deployment-595b5b9587-tpv2c" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tpv2c webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-tpv2c 3c10efa2-78fb-4911-8791-867e14e201b2 32410 0 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4bb0 0xc0045f4bb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDead
lineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.250,PodIP:10.244.4.65,StartTime:2020-03-06 03:59:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://bc1a201682d47543ca12fc5d8bc3d8cdac6b43b6559e9a64485f7fded11cfb73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.393: INFO: Pod "webserver-deployment-595b5b9587-vj4xt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vj4xt webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-vj4xt 0c094700-836e-4871-a844-5be386e82cd4 32429 0 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4d20 0xc0045f4d21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDead
lineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.64,StartTime:2020-03-06 03:59:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7e31ab822658493dfc037fa509fbf78e79414ae250f905176c45533d103ec343,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.394: INFO: Pod "webserver-deployment-595b5b9587-wqsbp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wqsbp webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-wqsbp b23a421d-cb94-486f-8b79-c5b41125f2c2 32566 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4e90 0xc0045f4e91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDead
lineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.394: INFO: Pod "webserver-deployment-595b5b9587-wxz2s" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wxz2s webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-wxz2s d5125e43-ad19-40ae-a1f2-15c53157f800 32434 0 2020-03-06 03:59:36 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f4f77 0xc0045f4f78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:10.244.3.66,StartTime:2020-03-06 03:59:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-06 03:59:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a5ff8fef30cbc5cd3c4152c84b866ed7c93de7418a8787d1cecca4821b23e552,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.394: INFO: Pod "webserver-deployment-595b5b9587-xbkxq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xbkxq webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-xbkxq 8af80211-7702-4d86-8280-fc994060c94c 32542 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f50f0 0xc0045f50f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.394: INFO: Pod "webserver-deployment-595b5b9587-xxs6q" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xxs6q webserver-deployment-595b5b9587- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-595b5b9587-xxs6q 3dc34a9e-0c7f-4d19-9f37-5a782f7a9b11 32553 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 485480e7-794e-41d9-9558-cc242d7d1c33 0xc0045f5200 0xc0045f5201}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.394: INFO: Pod "webserver-deployment-c7997dcc8-52hhz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-52hhz webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-52hhz 7752f9e8-85c0-4348-ab62-87e9f3e3d4ef 32558 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f5310 0xc0045f5311}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.394: INFO: Pod "webserver-deployment-c7997dcc8-64b52" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-64b52 webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-64b52 bfb44dc5-5ea4-479c-8838-5de635dd7e24 32571 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f5430 0xc0045f5431}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.394: INFO: Pod "webserver-deployment-c7997dcc8-89xjw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-89xjw webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-89xjw 730ebbbf-886b-4c2f-9412-1e3433f96fb7 32505 0 2020-03-06 03:59:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f5550 0xc0045f5551}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.250,PodIP:,StartTime:2020-03-06 03:59:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.394: INFO: Pod "webserver-deployment-c7997dcc8-fsrhg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fsrhg webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-fsrhg 01dd8b45-93fd-4360-b52e-730b433c529a 32504 0 2020-03-06 03:59:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f56c7 0xc0045f56c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.250,PodIP:,StartTime:2020-03-06 03:59:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.395: INFO: Pod "webserver-deployment-c7997dcc8-g5tsb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g5tsb webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-g5tsb df52ea03-80ab-49fd-81d7-3c8db5ac853b 32484 0 2020-03-06 03:59:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f5847 0xc0045f5848}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:,StartTime:2020-03-06 03:59:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.395: INFO: Pod "webserver-deployment-c7997dcc8-hkl5g" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hkl5g webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-hkl5g 290c70d8-a393-4e38-98b3-43f6309db6c4 32554 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f59d7 0xc0045f59d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.395: INFO: Pod "webserver-deployment-c7997dcc8-jdvbq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jdvbq webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-jdvbq 2d8b1b98-db3f-4716-82ae-df37aeff17f4 32483 0 2020-03-06 03:59:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f5b00 0xc0045f5b01}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.250,PodIP:,StartTime:2020-03-06 03:59:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.395: INFO: Pod "webserver-deployment-c7997dcc8-npbqp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-npbqp webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-npbqp 3eed9468-f35d-43c9-840e-4d69535bb8f5 32572 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f5c77 0xc0045f5c78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker01,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.395: INFO: Pod "webserver-deployment-c7997dcc8-p9h6z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p9h6z webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-p9h6z 01063f55-183b-49d7-b38f-50004bcf5945 32540 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f5da0 0xc0045f5da1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.395: INFO: Pod "webserver-deployment-c7997dcc8-pbfhr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pbfhr webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-pbfhr c1766ae5-550b-493f-b894-4b2dd4fdadfb 32475 0 2020-03-06 03:59:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0045f5ec0 0xc0045f5ec1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:worker02,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-06 03:59:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.1.251,PodIP:,StartTime:2020-03-06 03:59:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.395: INFO: Pod "webserver-deployment-c7997dcc8-qx9rv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qx9rv webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-qx9rv a8edc3da-3142-45bc-9a93-e3ae755f69b6 32557 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0044f0037 0xc0044f0038}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar  6 03:59:42.395: INFO: Pod "webserver-deployment-c7997dcc8-z8hlr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z8hlr webserver-deployment-c7997dcc8- deployment-6970 /api/v1/namespaces/deployment-6970/pods/webserver-deployment-c7997dcc8-z8hlr 12e7f8d9-70fe-4493-8d7b-bb99c3dea189 32563 0 2020-03-06 03:59:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 ed773729-d0b9-4f1c-94af-eda601cd2275 0xc0044f0137 0xc0044f0138}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l6cl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l6cl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l6cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:59:42.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6970" for this suite.

• [SLOW TEST:6.305 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":241,"skipped":4284,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:59:42.431: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename configmap
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in configmap-1145
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-46feb34e-47c4-4003-b6cd-3414ed614bee
STEP: Creating a pod to test consume configMaps
Mar  6 03:59:42.575: INFO: Waiting up to 5m0s for pod "pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50" in namespace "configmap-1145" to be "success or failure"
Mar  6 03:59:42.577: INFO: Pod "pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079636ms
Mar  6 03:59:44.580: INFO: Pod "pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004508211s
Mar  6 03:59:46.582: INFO: Pod "pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007103331s
Mar  6 03:59:48.587: INFO: Pod "pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011458284s
STEP: Saw pod success
Mar  6 03:59:48.587: INFO: Pod "pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50" satisfied condition "success or failure"
Mar  6 03:59:48.589: INFO: Trying to get logs from node worker01 pod pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50 container configmap-volume-test: 
STEP: delete the pod
Mar  6 03:59:48.621: INFO: Waiting for pod pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50 to disappear
Mar  6 03:59:48.624: INFO: Pod pod-configmaps-7eef15ae-6580-47f0-9e78-b8a274c54b50 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 03:59:48.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1145" for this suite.

• [SLOW TEST:6.201 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":4295,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 03:59:48.632: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-299
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Mar  6 03:59:48.779: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  6 03:59:48.786: INFO: Waiting for terminating namespaces to be deleted...
Mar  6 03:59:48.788: INFO: 
Logging pods the kubelet thinks is on node worker01 before test
Mar  6 03:59:48.796: INFO: contour-54748c65f5-jl5wz from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:59:48.796: INFO: metrics-server-78799bf646-xrsnn from kube-system started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 03:59:48.796: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:59:48.796: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:59:48.796: INFO: kube-proxy-kcb8f from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 03:59:48.796: INFO: contour-54748c65f5-gk5sz from projectcontour started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:59:48.796: INFO: kube-flannel-ds-amd64-xxhz9 from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 03:59:48.796: INFO: envoy-lvmcb from projectcontour started at 2020-03-06 02:30:45 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container envoy ready: false, restart count 0
Mar  6 03:59:48.796: INFO: kuard-678c676f5d-tzsnn from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:59:48.796: INFO: contour-certgen-82k46 from projectcontour started at 2020-03-06 02:30:46 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container contour ready: false, restart count 0
Mar  6 03:59:48.796: INFO: kuard-678c676f5d-vsn86 from default started at 2020-03-06 02:30:51 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:59:48.796: INFO: kuard-678c676f5d-m29b6 from default started at 2020-03-06 02:30:49 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.796: INFO: 	Container kuard ready: true, restart count 0
Mar  6 03:59:48.796: INFO: 
Logging pods the kubelet thinks is on node worker02 before test
Mar  6 03:59:48.801: INFO: kube-proxy-5xxdb from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.801: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 03:59:48.801: INFO: send-events-397faf29-a0a1-4cde-af0e-ec97abcb0999 from events-1799 started at 2020-03-06 03:59:25 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.801: INFO: 	Container p ready: true, restart count 0
Mar  6 03:59:48.801: INFO: kube-flannel-ds-amd64-ztfzf from kube-system started at 2020-03-06 02:30:30 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.801: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 03:59:48.801: INFO: sonobuoy from sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.801: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 03:59:48.801: INFO: sonobuoy-e2e-job-67137ff64ac145d3 from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:59:48.801: INFO: 	Container e2e ready: true, restart count 0
Mar  6 03:59:48.801: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 03:59:48.801: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd from sonobuoy started at 2020-03-06 02:38:12 +0000 UTC (2 container statuses recorded)
Mar  6 03:59:48.801: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 03:59:48.801: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 03:59:48.801: INFO: envoy-wgz76 from projectcontour started at 2020-03-06 02:30:55 +0000 UTC (1 container statuses recorded)
Mar  6 03:59:48.801: INFO: 	Container envoy ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1491fcab-837c-4c2f-96b9-31e5777611f1 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-1491fcab-837c-4c2f-96b9-31e5777611f1 off the node worker02
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1491fcab-837c-4c2f-96b9-31e5777611f1
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:04:54.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-299" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:306.254 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":243,"skipped":4298,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
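[Editor's note: the hostPort conflict exercised by the test above can be reproduced with two minimal manifests along these lines. This is a hedged sketch, not the suite's actual pod specs; the pod names, image, and node pinning are illustrative. Both pods request hostPort 54322/TCP on the same node; because the first binds hostIP 0.0.0.0 (all addresses), the scheduler treats 127.0.0.1:54322 as conflicting and leaves the second pod Pending.]

```yaml
# pod4: binds hostPort 54322 on 0.0.0.0 (the e2e test uses an empty hostIP,
# which Kubernetes treats as 0.0.0.0) and schedules normally.
apiVersion: v1
kind: Pod
metadata:
  name: pod4                            # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: worker02    # pin both pods to one node
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1         # any image works for this sketch
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 0.0.0.0
      protocol: TCP
---
# pod5: same hostPort and protocol but hostIP 127.0.0.1; conflicts with
# pod4's 0.0.0.0 bind, so it remains unschedulable on that node.
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/hostname: worker02
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```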
S
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:04:54.886: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename job
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in job-3439
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Mar  6 04:04:59.543: INFO: Successfully updated pod "adopt-release-l5p5q"
STEP: Checking that the Job readopts the Pod
Mar  6 04:04:59.543: INFO: Waiting up to 15m0s for pod "adopt-release-l5p5q" in namespace "job-3439" to be "adopted"
Mar  6 04:04:59.545: INFO: Pod "adopt-release-l5p5q": Phase="Running", Reason="", readiness=true. Elapsed: 1.894352ms
Mar  6 04:05:01.548: INFO: Pod "adopt-release-l5p5q": Phase="Running", Reason="", readiness=true. Elapsed: 2.004642804s
Mar  6 04:05:01.548: INFO: Pod "adopt-release-l5p5q" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Mar  6 04:05:02.053: INFO: Successfully updated pod "adopt-release-l5p5q"
STEP: Checking that the Job releases the Pod
Mar  6 04:05:02.054: INFO: Waiting up to 15m0s for pod "adopt-release-l5p5q" in namespace "job-3439" to be "released"
Mar  6 04:05:02.056: INFO: Pod "adopt-release-l5p5q": Phase="Running", Reason="", readiness=true. Elapsed: 2.590677ms
Mar  6 04:05:04.058: INFO: Pod "adopt-release-l5p5q": Phase="Running", Reason="", readiness=true. Elapsed: 2.004814226s
Mar  6 04:05:04.058: INFO: Pod "adopt-release-l5p5q" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:05:04.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3439" for this suite.

• [SLOW TEST:9.179 seconds]
[sig-apps] Job
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":244,"skipped":4299,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
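[Editor's note: the adopt/release mechanics in the Job test above hinge on the Job's label selector: removing the selector labels from a running pod (as the test does) makes the controller release it, and restoring them lets the Job readopt it. A simplified sketch of such a Job follows; it is illustrative only — the actual e2e Job is created programmatically and, as far as this log shows, uses its own selector and an agnhost-based image, not the values below.]

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release          # mirrors the pod-name prefix seen in the log
spec:
  parallelism: 2               # the test first waits for active pods == parallelism
  template:
    metadata:
      labels:
        job: adopt-release     # stripping this label from a pod causes the Job
                               # controller to release it; re-adding it triggers adoption
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: k8s.gcr.io/pause:3.1   # placeholder image for this sketch
```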
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:05:04.065: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename resourcequota
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in resourcequota-378
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:05:17.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-378" for this suite.

• [SLOW TEST:13.186 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":245,"skipped":4318,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
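[Editor's note: the quota lifecycle walked through above — create a quota, admit a pod that fits, reject pods that exceed the remainder, and release usage on delete — corresponds to objects like the following sketch. Names and limit values are illustrative, not taken from the test.]

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota              # illustrative name
spec:
  hard:
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
---
# A pod is admitted only if its requests fit within the quota's remaining
# hard limits; a later pod requesting more than what is left is rejected
# at admission time, and deleting this pod releases its tracked usage.
apiVersion: v1
kind: Pod
metadata:
  name: fits-quota
spec:
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1   # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
```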
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:05:17.251: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-4256
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-8572308d-b5d7-49f0-ae03-6461bf659cdc
STEP: Creating a pod to test consume secrets
Mar  6 04:05:17.395: INFO: Waiting up to 5m0s for pod "pod-secrets-c72db646-7ffd-40b7-bbc0-1424fd037093" in namespace "secrets-4256" to be "success or failure"
Mar  6 04:05:17.397: INFO: Pod "pod-secrets-c72db646-7ffd-40b7-bbc0-1424fd037093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.359225ms
Mar  6 04:05:19.401: INFO: Pod "pod-secrets-c72db646-7ffd-40b7-bbc0-1424fd037093": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005935379s
STEP: Saw pod success
Mar  6 04:05:19.401: INFO: Pod "pod-secrets-c72db646-7ffd-40b7-bbc0-1424fd037093" satisfied condition "success or failure"
Mar  6 04:05:19.403: INFO: Trying to get logs from node worker02 pod pod-secrets-c72db646-7ffd-40b7-bbc0-1424fd037093 container secret-volume-test: 
STEP: delete the pod
Mar  6 04:05:19.427: INFO: Waiting for pod pod-secrets-c72db646-7ffd-40b7-bbc0-1424fd037093 to disappear
Mar  6 04:05:19.431: INFO: Pod pod-secrets-c72db646-7ffd-40b7-bbc0-1424fd037093 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:05:19.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4256" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4329,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
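[Editor's note: the Secret-volume behavior verified above maps to a pod spec along the following lines: `defaultMode` sets the permission bits on the projected files, while `runAsUser`/`fsGroup` make them readable as non-root. This is a hedged sketch — the UID/GID values and image are illustrative, and the actual test generates UUID-suffixed secret and pod names.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example     # illustrative; the test uses a UUID-suffixed name
spec:
  securityContext:
    runAsUser: 1000             # non-root, as the [LinuxOnly] non-root variant requires
    fsGroup: 1001               # volume files get this group ownership
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/pause:3.1 # placeholder; the e2e test uses a mounttest image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # illustrative secret name
      defaultMode: 0400         # files created with mode 0400, owned by fsGroup
```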
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:05:19.438: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-2932
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 04:05:19.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6416bae8-a687-4b34-aa7a-18513844d728" in namespace "projected-2932" to be "success or failure"
Mar  6 04:05:19.577: INFO: Pod "downwardapi-volume-6416bae8-a687-4b34-aa7a-18513844d728": Phase="Pending", Reason="", readiness=false. Elapsed: 3.07206ms
Mar  6 04:05:21.579: INFO: Pod "downwardapi-volume-6416bae8-a687-4b34-aa7a-18513844d728": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005367547s
STEP: Saw pod success
Mar  6 04:05:21.579: INFO: Pod "downwardapi-volume-6416bae8-a687-4b34-aa7a-18513844d728" satisfied condition "success or failure"
Mar  6 04:05:21.581: INFO: Trying to get logs from node worker02 pod downwardapi-volume-6416bae8-a687-4b34-aa7a-18513844d728 container client-container: 
STEP: delete the pod
Mar  6 04:05:21.593: INFO: Waiting for pod downwardapi-volume-6416bae8-a687-4b34-aa7a-18513844d728 to disappear
Mar  6 04:05:21.599: INFO: Pod downwardapi-volume-6416bae8-a687-4b34-aa7a-18513844d728 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:05:21.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2932" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4366,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
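[Editor's note: the projected downward API volume tested above exposes container resource fields as files inside the pod; `resourceFieldRef` with `limits.memory` yields a file containing the memory limit in bytes. A hedged sketch follows — the pod name, image, mount path, and limit value are illustrative.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test uses a UUID-suffixed name
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/pause:3.1      # placeholder; the e2e test uses a mounttest image
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # file content is the limit, in bytes
```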
SSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:05:21.608: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename replicaset
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in replicaset-7132
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 04:05:21.743: INFO: Creating ReplicaSet my-hostname-basic-7c145699-34a4-46f6-888a-3c4337ba21ab
Mar  6 04:05:21.748: INFO: Pod name my-hostname-basic-7c145699-34a4-46f6-888a-3c4337ba21ab: Found 0 pods out of 1
Mar  6 04:05:26.752: INFO: Pod name my-hostname-basic-7c145699-34a4-46f6-888a-3c4337ba21ab: Found 1 pods out of 1
Mar  6 04:05:26.752: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7c145699-34a4-46f6-888a-3c4337ba21ab" is running
Mar  6 04:05:26.755: INFO: Pod "my-hostname-basic-7c145699-34a4-46f6-888a-3c4337ba21ab-dv749" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 04:05:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 04:05:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 04:05:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-06 04:05:21 +0000 UTC Reason: Message:}])
Mar  6 04:05:26.755: INFO: Trying to dial the pod
Mar  6 04:05:31.763: INFO: Controller my-hostname-basic-7c145699-34a4-46f6-888a-3c4337ba21ab: Got expected result from replica 1 [my-hostname-basic-7c145699-34a4-46f6-888a-3c4337ba21ab-dv749]: "my-hostname-basic-7c145699-34a4-46f6-888a-3c4337ba21ab-dv749", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:05:31.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7132" for this suite.

• [SLOW TEST:10.161 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":248,"skipped":4369,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:05:31.770: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pod-network-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pod-network-test-9331
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9331
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar  6 04:05:31.899: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar  6 04:05:47.956: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.85:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9331 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 04:05:47.956: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 04:05:48.063: INFO: Found all expected endpoints: [netserver-0]
Mar  6 04:05:48.065: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.88:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9331 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar  6 04:05:48.065: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
Mar  6 04:05:48.215: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:05:48.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9331" for this suite.

• [SLOW TEST:16.453 seconds]
[sig-network] Networking
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4397,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:05:48.223: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename containers
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in containers-8604
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Mar  6 04:05:48.359: INFO: Waiting up to 5m0s for pod "client-containers-d28d3eb0-a16b-48cb-8bf2-3824058da5a9" in namespace "containers-8604" to be "success or failure"
Mar  6 04:05:48.361: INFO: Pod "client-containers-d28d3eb0-a16b-48cb-8bf2-3824058da5a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144904ms
Mar  6 04:05:50.364: INFO: Pod "client-containers-d28d3eb0-a16b-48cb-8bf2-3824058da5a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004441165s
STEP: Saw pod success
Mar  6 04:05:50.364: INFO: Pod "client-containers-d28d3eb0-a16b-48cb-8bf2-3824058da5a9" satisfied condition "success or failure"
Mar  6 04:05:50.366: INFO: Trying to get logs from node worker02 pod client-containers-d28d3eb0-a16b-48cb-8bf2-3824058da5a9 container test-container: 
STEP: delete the pod
Mar  6 04:05:50.378: INFO: Waiting for pod client-containers-d28d3eb0-a16b-48cb-8bf2-3824058da5a9 to disappear
Mar  6 04:05:50.380: INFO: Pod client-containers-d28d3eb0-a16b-48cb-8bf2-3824058da5a9 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:05:50.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8604" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4398,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:05:50.387: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-344
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar  6 04:05:50.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-344'
Mar  6 04:05:55.618: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar  6 04:05:55.618: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793
Mar  6 04:05:55.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete jobs e2e-test-httpd-job --namespace=kubectl-344'
Mar  6 04:06:10.708: INFO: stderr: ""
Mar  6 04:06:10.708: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:06:10.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-344" for this suite.

• [SLOW TEST:20.335 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run job
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1784
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":251,"skipped":4428,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:06:10.723: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename pods
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-8139
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Mar  6 04:06:12.873: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-780690759 proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Mar  6 04:06:27.934: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:06:27.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8139" for this suite.

• [SLOW TEST:17.220 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":252,"skipped":4438,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:06:27.943: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-3400
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar  6 04:06:28.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0911963-2fd5-4a55-b5c3-93f4f14537f5" in namespace "projected-3400" to be "success or failure"
Mar  6 04:06:28.079: INFO: Pod "downwardapi-volume-d0911963-2fd5-4a55-b5c3-93f4f14537f5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.766508ms
Mar  6 04:06:30.082: INFO: Pod "downwardapi-volume-d0911963-2fd5-4a55-b5c3-93f4f14537f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004204114s
STEP: Saw pod success
Mar  6 04:06:30.082: INFO: Pod "downwardapi-volume-d0911963-2fd5-4a55-b5c3-93f4f14537f5" satisfied condition "success or failure"
Mar  6 04:06:30.083: INFO: Trying to get logs from node worker02 pod downwardapi-volume-d0911963-2fd5-4a55-b5c3-93f4f14537f5 container client-container: 
STEP: delete the pod
Mar  6 04:06:30.098: INFO: Waiting for pod downwardapi-volume-d0911963-2fd5-4a55-b5c3-93f4f14537f5 to disappear
Mar  6 04:06:30.100: INFO: Pod downwardapi-volume-d0911963-2fd5-4a55-b5c3-93f4f14537f5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:06:30.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3400" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4444,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:06:30.111: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename secrets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in secrets-5247
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-f49459ba-8061-485f-8be7-14fdc518cf8f
STEP: Creating a pod to test consume secrets
Mar  6 04:06:30.252: INFO: Waiting up to 5m0s for pod "pod-secrets-307b5724-f555-4ab8-b25a-c279acbd0cb0" in namespace "secrets-5247" to be "success or failure"
Mar  6 04:06:30.254: INFO: Pod "pod-secrets-307b5724-f555-4ab8-b25a-c279acbd0cb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033525ms
Mar  6 04:06:32.257: INFO: Pod "pod-secrets-307b5724-f555-4ab8-b25a-c279acbd0cb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004946667s
STEP: Saw pod success
Mar  6 04:06:32.257: INFO: Pod "pod-secrets-307b5724-f555-4ab8-b25a-c279acbd0cb0" satisfied condition "success or failure"
Mar  6 04:06:32.259: INFO: Trying to get logs from node worker02 pod pod-secrets-307b5724-f555-4ab8-b25a-c279acbd0cb0 container secret-env-test: 
STEP: delete the pod
Mar  6 04:06:32.273: INFO: Waiting for pod pod-secrets-307b5724-f555-4ab8-b25a-c279acbd0cb0 to disappear
Mar  6 04:06:32.275: INFO: Pod pod-secrets-307b5724-f555-4ab8-b25a-c279acbd0cb0 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:06:32.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5247" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4445,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:06:32.282: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename projected
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in projected-7874
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-c99a1dd4-04b0-4713-ac6e-a2b8ca64c3dc
STEP: Creating configMap with name cm-test-opt-upd-07b6dda0-b45c-4fc9-bbde-7f28e7faf19c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c99a1dd4-04b0-4713-ac6e-a2b8ca64c3dc
STEP: Updating configmap cm-test-opt-upd-07b6dda0-b45c-4fc9-bbde-7f28e7faf19c
STEP: Creating configMap with name cm-test-opt-create-7901be34-e89b-4c81-ab38-23002ce92d0b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:06:36.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7874" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4471,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom 
resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:06:36.492: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename kubectl
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubectl-1537
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733
[It] should create a deployment from an image  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar  6 04:06:36.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-1537'
Mar  6 04:06:41.712: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar  6 04:06:41.712: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738
Mar  6 04:06:45.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-780690759 delete deployment e2e-test-httpd-deployment --namespace=kubectl-1537'
Mar  6 04:07:00.811: INFO: stderr: ""
Mar  6 04:07:00.811: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:07:00.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1537" for this suite.

• [SLOW TEST:24.328 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1729
    should create a deployment from an image  [Conformance]
    /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":256,"skipped":4471,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:07:00.821: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename emptydir
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-3829
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar  6 04:07:00.956: INFO: Waiting up to 5m0s for pod "pod-5ee4d443-6e3a-440e-bd64-b75432a2a175" in namespace "emptydir-3829" to be "success or failure"
Mar  6 04:07:00.958: INFO: Pod "pod-5ee4d443-6e3a-440e-bd64-b75432a2a175": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202436ms
Mar  6 04:07:02.961: INFO: Pod "pod-5ee4d443-6e3a-440e-bd64-b75432a2a175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004411013s
STEP: Saw pod success
Mar  6 04:07:02.961: INFO: Pod "pod-5ee4d443-6e3a-440e-bd64-b75432a2a175" satisfied condition "success or failure"
Mar  6 04:07:02.963: INFO: Trying to get logs from node worker02 pod pod-5ee4d443-6e3a-440e-bd64-b75432a2a175 container test-container: 
STEP: delete the pod
Mar  6 04:07:02.976: INFO: Waiting for pod pod-5ee4d443-6e3a-440e-bd64-b75432a2a175 to disappear
Mar  6 04:07:02.978: INFO: Pod pod-5ee4d443-6e3a-440e-bd64-b75432a2a175 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:07:02.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3829" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4531,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:07:02.985: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-473
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 04:07:03.415: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 04:07:06.450: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
Mar  6 04:07:16.523: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:07:26.643: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:07:36.744: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:07:46.845: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:07:56.865: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:07:56.865: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-473".
STEP: Found 6 events.
Mar  6 04:07:56.868: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-z6p4x: {default-scheduler } Scheduled: Successfully assigned webhook-473/sample-webhook-deployment-5f65f8c764-z6p4x to worker02
Mar  6 04:07:56.868: INFO: At 2020-03-06 04:07:03 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 04:07:56.868: INFO: At 2020-03-06 04:07:03 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-z6p4x
Mar  6 04:07:56.868: INFO: At 2020-03-06 04:07:04 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-z6p4x: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 04:07:56.868: INFO: At 2020-03-06 04:07:04 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-z6p4x: {kubelet worker02} Created: Created container sample-webhook
Mar  6 04:07:56.868: INFO: At 2020-03-06 04:07:04 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-z6p4x: {kubelet worker02} Started: Started container sample-webhook
Mar  6 04:07:56.870: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 04:07:56.870: INFO: sample-webhook-deployment-5f65f8c764-z6p4x  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 04:07:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 04:07:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 04:07:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 04:07:03 +0000 UTC  }]
Mar  6 04:07:56.870: INFO: 
Mar  6 04:07:56.873: INFO: 
Logging node info for node master01
Mar  6 04:07:56.874: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 33737 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:07 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:07 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:07 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:04:07 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:07:56.875: INFO: 
Logging kubelet events for node master01
Mar  6 04:07:56.876: INFO: 
Logging pods the kubelet thinks are on node master01
Mar  6 04:07:56.887: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.887: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 04:07:56.887: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.887: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 04:07:56.887: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.887: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 04:07:56.887: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.887: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 04:07:56.887: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:07:56.887: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:07:56.887: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 04:07:56.887: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:07:56.887: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:07:56.887: INFO: 	Container kube-flannel ready: true, restart count 0
W0306 04:07:56.890223      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:07:56.908: INFO: 
Latency metrics for node master01
Mar  6 04:07:56.908: INFO: 
Logging node info for node master02
Mar  6 04:07:56.909: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 33723 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:07:56.910: INFO: 
Logging kubelet events for node master02
Mar  6 04:07:56.912: INFO: 
Logging pods the kubelet thinks are on node master02
Mar  6 04:07:56.926: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.926: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 04:07:56.926: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:07:56.926: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:07:56.926: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 04:07:56.926: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.926: INFO: 	Container coredns ready: true, restart count 0
Mar  6 04:07:56.926: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:07:56.926: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:07:56.926: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 04:07:56.926: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.926: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 04:07:56.926: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.926: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 04:07:56.926: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.926: INFO: 	Container kube-scheduler ready: true, restart count 1
W0306 04:07:56.929331      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:07:56.945: INFO: 
Latency metrics for node master02
Mar  6 04:07:56.945: INFO: 
Logging node info for node master03
Mar  6 04:07:56.947: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 33725 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:07:56.947: INFO: 
Logging kubelet events for node master03
Mar  6 04:07:56.949: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 04:07:56.960: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Container coredns ready: true, restart count 0
Mar  6 04:07:56.960: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:07:56.960: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 04:07:56.960: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 04:07:56.960: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 04:07:56.960: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 04:07:56.960: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 04:07:56.960: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 04:07:56.960: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:07:56.960: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 04:07:56.960: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.960: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
W0306 04:07:56.962689      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:07:56.980: INFO: 
Latency metrics for node master03
Mar  6 04:07:56.980: INFO: 
Logging node info for node worker01
Mar  6 04:07:56.982: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 34084 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:05:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:05:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:05:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:05:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:07:56.982: INFO: 
Logging kubelet events for node worker01
Mar  6 04:07:56.984: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 04:07:56.995: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 04:07:56.995: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container contour ready: false, restart count 0
Mar  6 04:07:56.995: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container contour ready: false, restart count 0
Mar  6 04:07:56.995: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:07:56.995: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 04:07:56.995: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 04:07:56.995: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:07:56.995: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 04:07:56.995: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container kuard ready: true, restart count 0
Mar  6 04:07:56.995: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 04:07:56.995: INFO: 	Container envoy ready: false, restart count 0
Mar  6 04:07:56.995: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container contour ready: false, restart count 0
Mar  6 04:07:56.995: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container kuard ready: true, restart count 0
Mar  6 04:07:56.995: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:56.995: INFO: 	Container kuard ready: true, restart count 0
W0306 04:07:56.997740      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:07:57.020: INFO: 
Latency metrics for node worker01
Mar  6 04:07:57.020: INFO: 
Logging node info for node worker02
Mar  6 04:07:57.022: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 33877 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:03:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:03:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:03:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:03:18 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:07:57.022: INFO: 
Logging kubelet events for node worker02
Mar  6 04:07:57.024: INFO: 
Logging pods the kubelet thinks is on node worker02
Mar  6 04:07:57.028: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:57.028: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 04:07:57.028: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:07:57.028: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:07:57.028: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 04:07:57.028: INFO: sample-webhook-deployment-5f65f8c764-z6p4x started at 2020-03-06 04:07:03 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:57.028: INFO: 	Container sample-webhook ready: true, restart count 0
Mar  6 04:07:57.028: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:07:57.028: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 04:07:57.028: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:07:57.028: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 04:07:57.028: INFO: 	Container envoy ready: false, restart count 0
Mar  6 04:07:57.028: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:07:57.028: INFO: 	Container e2e ready: true, restart count 0
Mar  6 04:07:57.028: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 04:07:57.028: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:07:57.028: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:07:57.028: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 04:07:57.031302      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:07:57.059: INFO: 
Latency metrics for node worker02
Mar  6 04:07:57.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-473" for this suite.
STEP: Destroying namespace "webhook-473-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [54.143 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 04:07:56.865: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:608
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":257,"skipped":4534,"failed":19,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom 
resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:07:57.129: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in custom-resource-definition-851
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar  6 04:07:57.274: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar  6 04:08:03.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-851" for this suite.

• [SLOW TEST:5.978 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":258,"skipped":4548,"failed":19,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar  6 04:08:03.107: INFO: >>> kubeConfig: /tmp/kubeconfig-780690759
STEP: Building a namespace api object, basename webhook
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in webhook-3247
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar  6 04:08:03.932: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar  6 04:08:06.949: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
Mar  6 04:08:16.968: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:08:27.077: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:08:37.177: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:08:47.278: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:08:57.288: INFO: Waiting for webhook configuration to be ready...
Mar  6 04:08:57.288: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b3950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "webhook-3247".
STEP: Found 6 events.
Mar  6 04:08:57.291: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-gn9f8: {default-scheduler } Scheduled: Successfully assigned webhook-3247/sample-webhook-deployment-5f65f8c764-gn9f8 to worker02
Mar  6 04:08:57.291: INFO: At 2020-03-06 04:08:03 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
Mar  6 04:08:57.291: INFO: At 2020-03-06 04:08:03 +0000 UTC - event for sample-webhook-deployment-5f65f8c764: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-5f65f8c764-gn9f8
Mar  6 04:08:57.291: INFO: At 2020-03-06 04:08:04 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-gn9f8: {kubelet worker02} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Mar  6 04:08:57.291: INFO: At 2020-03-06 04:08:04 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-gn9f8: {kubelet worker02} Created: Created container sample-webhook
Mar  6 04:08:57.291: INFO: At 2020-03-06 04:08:04 +0000 UTC - event for sample-webhook-deployment-5f65f8c764-gn9f8: {kubelet worker02} Started: Started container sample-webhook
Mar  6 04:08:57.293: INFO: POD                                         NODE      PHASE    GRACE  CONDITIONS
Mar  6 04:08:57.293: INFO: sample-webhook-deployment-5f65f8c764-gn9f8  worker02  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 04:08:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 04:08:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 04:08:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-06 04:08:03 +0000 UTC  }]
Mar  6 04:08:57.293: INFO: 
Mar  6 04:08:57.296: INFO: 
Logging node info for node master01
Mar  6 04:08:57.298: INFO: Node Info: &Node{ObjectMeta:{master01   /api/v1/nodes/master01 aeae8a5b-4e17-4702-bb02-bcfde6cdb12a 33737 0 2020-03-06 02:29:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master01 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"76:15:82:0d:8b:ab"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.247 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:07 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:07 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:07 +0000 UTC,LastTransitionTime:2020-03-06 02:29:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:04:07 +0000 UTC,LastTransitionTime:2020-03-06 02:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.247,},NodeAddress{Type:Hostname,Address:master01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:195205FE-EE72-4794-8EAA-AC554EFDEC9B,BootID:6a3bf627-7476-4f52-84fa-f3eab6d26427,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[192.168.1.252/library/k8s-keepalived@sha256:3db0032ef2feef675710595681cf9463470af179cd324c6773e831b6649ef785 192.168.1.252/library/k8s-keepalived:1.3.5],SizeBytes:356553439,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/node@sha256:3226b047a7034918a05c986347c5fb4d2cce6d0844f325851bfba586271ee617 192.168.1.252/library/node:v3.12.0],SizeBytes:257501722,},ContainerImage{Names:[192.168.1.252/library/cni@sha256:dc3bc525f1d3b794db1f2a7ceb7d8b84699d13e1431fbc117063f7e2075ff4b5 
192.168.1.252/library/cni:v3.12.0],SizeBytes:206678344,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/envoy@sha256:b36ee021fc4d285de7861dbaee01e7437ce1d63814ead6ae3e4dfcad4a951b2e 192.168.1.252/library/envoy:v1.12.2],SizeBytes:170487454,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/pod2daemon-flexvol@sha256:2bf967507ad1adb749f3484b5d39e7d7b8700c4a0f836e8093dae5c57a585ccf 192.168.1.252/library/pod2daemon-flexvol:v3.12.0],SizeBytes:111122324,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/kube-controllers@sha256:edf14a5bcc663d2b0013b1830469626b7aa27206cbc7715ed83c042890ca5837 
192.168.1.252/library/kube-controllers:v3.12.0],SizeBytes:56567983,},ContainerImage{Names:[192.168.1.252/library/typha@sha256:3baf9aef445a3224160748d6f560426eab798d6c65620020b2466e114bf6805f 192.168.1.252/library/typha:v3.12.0],SizeBytes:56034822,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/ctl@sha256:128e4c95cf92a482496d591c43cad2a6a21fab1f0e8a8f13e8503f1324106dc8 192.168.1.252/library/ctl:v3.12.0],SizeBytes:47895826,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/contour@sha256:3e10c69dfeaa830b84a50e6b47ce90e0f5a1aa84daf77f7662313077fa9579cf 192.168.1.252/library/contour:v1.1.0],SizeBytes:35721216,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:08:57.298: INFO: 
Logging kubelet events for node master01
Mar  6 04:08:57.300: INFO: 
Logging pods the kubelet thinks is on node master01
Mar  6 04:08:57.305: INFO: kube-flannel-ds-amd64-6mbnb started at 2020-03-06 02:30:22 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:08:57.305: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:08:57.305: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 04:08:57.306: INFO: kube-proxy-4j8ft started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.306: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 04:08:57.306: INFO: kube-apiserver-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.306: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 04:08:57.306: INFO: kube-controller-manager-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.306: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 04:08:57.306: INFO: kube-scheduler-master01 started at 2020-03-06 02:30:22 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.306: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 04:08:57.306: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-drhpn started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:08:57.306: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:08:57.306: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 04:08:57.309800      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:08:57.328: INFO: 
Latency metrics for node master01
Mar  6 04:08:57.328: INFO: 
Logging node info for node master02
Mar  6 04:08:57.330: INFO: Node Info: &Node{ObjectMeta:{master02   /api/v1/nodes/master02 6a0ecb6f-ef31-4754-858b-3eba76999224 33723 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master02 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"82:c1:38:99:3b:39"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.248 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:16 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.248,},NodeAddress{Type:Hostname,Address:master02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:8B2C4639-6D22-4D0D-A03C-F6D7E328F9D5,BootID:efd7329f-ae31-4806-ba13-7fdd5fad57df,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 
192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:08:57.330: INFO: 
Logging kubelet events for node master02
Mar  6 04:08:57.332: INFO: 
Logging pods the kubelet thinks is on node master02
Mar  6 04:08:57.337: INFO: kube-controller-manager-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.337: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 04:08:57.337: INFO: kube-scheduler-master02 started at 2020-03-06 02:29:23 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.337: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 04:08:57.337: INFO: kube-proxy-scdss started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.337: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 04:08:57.337: INFO: kube-flannel-ds-amd64-vfl78 started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:08:57.337: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:08:57.337: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 04:08:57.337: INFO: coredns-7795996659-phdkc started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.337: INFO: 	Container coredns ready: true, restart count 0
Mar  6 04:08:57.337: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2zmwm started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:08:57.337: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:08:57.337: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 04:08:57.337: INFO: kube-apiserver-master02 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.337: INFO: 	Container kube-apiserver ready: true, restart count 0
W0306 04:08:57.341414      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:08:57.357: INFO: 
Latency metrics for node master02
Mar  6 04:08:57.357: INFO: 
Logging node info for node master03
Mar  6 04:08:57.359: INFO: Node Info: &Node{ObjectMeta:{master03   /api/v1/nodes/master03 c508ee4c-fe9d-4c73-a857-e57fba26fa86 33725 0 2020-03-06 02:29:17 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master03 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4a:aa:08:ea:16:90"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.249 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823226880 0} {} 3733620Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718369280 0} {} 3631220Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:29:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:04:04 +0000 UTC,LastTransitionTime:2020-03-06 02:30:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.249,},NodeAddress{Type:Hostname,Address:master03,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:0C8F0A67-EB7E-42DE-9418-0973FE735A08,BootID:05b1fa23-e6be-4032-bc93-8800264dff91,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[192.168.1.252/library/etcd-amd64@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216 192.168.1.252/library/etcd-amd64:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[192.168.1.252/library/kube-apiserver@sha256:4ee4113bcce32ae436b15364b09a2439dee15f6a19bbcf43a470d3dc879b0c62 192.168.1.252/library/kube-apiserver:v1.17.3],SizeBytes:170986003,},ContainerImage{Names:[192.168.1.252/library/kube-controller-manager@sha256:8f03391c0d22e3da8d22725178efe3c4338e1920504f7f4eb4a1c7f5f40c4c6e 192.168.1.252/library/kube-controller-manager:v1.17.3],SizeBytes:160918035,},ContainerImage{Names:[192.168.1.252/library/dashboard@sha256:4e0d39dae7e089b77fe2bbcef648f89905716db9c1f0884950bfd42d9f446c29 
192.168.1.252/library/dashboard:v2.0.0-rc5],SizeBytes:126359420,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[192.168.1.252/library/kube-scheduler@sha256:eea535b58da5b2fb66b32c61c6913d014cf85061a39527ad9bca2fa84b53dc1c 192.168.1.252/library/kube-scheduler:v1.17.3],SizeBytes:94435859,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[192.168.1.252/library/k8s-haproxy@sha256:f88bca67a2782e7bd88af168d5c11216c32104d12fe9240fac54a1d3196e3f9c 192.168.1.252/library/k8s-haproxy:2.0.0],SizeBytes:73550856,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/coredns@sha256:608ac7ccba5ce41c6941fca13bc67059c1eef927fd968b554b790e21cc92543c 192.168.1.252/library/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[192.168.1.252/library/metrics-scraper@sha256:e24a74b3b1cdc84d6285d507a12eb06907fd8c457b3e8ae9baa9418eca43efc4 192.168.1.252/library/metrics-scraper:v1.0.3],SizeBytes:40105664,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d 192.168.1.252/library/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:08:57.359: INFO: 
Logging kubelet events for node master03
Mar  6 04:08:57.361: INFO: 
Logging pods the kubelet thinks are on node master03
Mar  6 04:08:57.366: INFO: kube-controller-manager-master03 started at 2020-03-06 02:29:50 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Container kube-controller-manager ready: true, restart count 1
Mar  6 04:08:57.366: INFO: kube-flannel-ds-amd64-hs69k started at 2020-03-06 02:30:00 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:08:57.366: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 04:08:57.366: INFO: dashboard-metrics-scraper-56568cb9d7-d57kl started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Container dashboard-metrics-scraper ready: true, restart count 0
Mar  6 04:08:57.366: INFO: kube-apiserver-master03 started at 2020-03-06 02:29:24 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar  6 04:08:57.366: INFO: kube-scheduler-master03 started at 2020-03-06 02:29:38 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Container kube-scheduler ready: true, restart count 1
Mar  6 04:08:57.366: INFO: kube-proxy-stbnn started at 2020-03-06 02:30:00 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 04:08:57.366: INFO: kubernetes-dashboard-6647798d59-j2ms4 started at 2020-03-06 02:30:10 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
Mar  6 04:08:57.366: INFO: coredns-7795996659-cmq4d started at 2020-03-06 02:30:13 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Container coredns ready: true, restart count 0
Mar  6 04:08:57.366: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-w5psq started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:08:57.366: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:08:57.366: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 04:08:57.369211      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:08:57.386: INFO: 
Latency metrics for node master03
Mar  6 04:08:57.386: INFO: 
Logging node info for node worker01
Mar  6 04:08:57.391: INFO: Node Info: &Node{ObjectMeta:{worker01   /api/v1/nodes/worker01 cf4203bb-1bfa-4b35-991f-935275b6bc46 34084 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker01 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"5a:49:f5:5b:74:b3"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.250 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:05:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:05:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:05:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:05:20 +0000 UTC,LastTransitionTime:2020-03-06 02:30:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.250,},NodeAddress{Type:Hostname,Address:worker01,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:5CA364AA-0FF8-4B57-BA86-F28699575F0D,BootID:c85ad0c4-ebcf-4d01-97f0-a36c1cfc50be,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 
gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[192.168.1.252/library/metrics-server-amd64@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b 192.168.1.252/library/metrics-server-amd64:v0.3.6],SizeBytes:39944451,},ContainerImage{Names:[192.168.1.252/library/kuard-amd64@sha256:bd17153e9a3319f401acc7a27759243f37d422c06cbbf01cb3e1f54bbbfe14f4 192.168.1.252/library/kuard-amd64:1],SizeBytes:19745911,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:08:57.392: INFO: 
Logging kubelet events for node worker01
Mar  6 04:08:57.395: INFO: 
Logging pods the kubelet thinks are on node worker01
Mar  6 04:08:57.399: INFO: kube-flannel-ds-amd64-xxhz9 started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:08:57.399: INFO: 	Container kube-flannel ready: true, restart count 1
Mar  6 04:08:57.399: INFO: envoy-lvmcb started at 2020-03-06 02:30:45 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 04:08:57.399: INFO: 	Container envoy ready: false, restart count 0
Mar  6 04:08:57.399: INFO: kuard-678c676f5d-tzsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container kuard ready: true, restart count 0
Mar  6 04:08:57.399: INFO: contour-certgen-82k46 started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container contour ready: false, restart count 0
Mar  6 04:08:57.399: INFO: kuard-678c676f5d-vsn86 started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container kuard ready: true, restart count 0
Mar  6 04:08:57.399: INFO: kuard-678c676f5d-m29b6 started at 2020-03-06 02:30:49 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container kuard ready: true, restart count 0
Mar  6 04:08:57.399: INFO: contour-54748c65f5-jl5wz started at 2020-03-06 02:30:46 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container contour ready: false, restart count 0
Mar  6 04:08:57.399: INFO: metrics-server-78799bf646-xrsnn started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container metrics-server ready: true, restart count 0
Mar  6 04:08:57.399: INFO: kube-proxy-kcb8f started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container kube-proxy ready: true, restart count 0
Mar  6 04:08:57.399: INFO: contour-54748c65f5-gk5sz started at 2020-03-06 02:30:51 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container contour ready: false, restart count 0
Mar  6 04:08:57.399: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-2bz8g started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:08:57.399: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:08:57.399: INFO: 	Container systemd-logs ready: true, restart count 0
W0306 04:08:57.407058      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:08:57.435: INFO: 
Latency metrics for node worker01
Mar  6 04:08:57.435: INFO: 
Logging node info for node worker02
Mar  6 04:08:57.438: INFO: Node Info: &Node{ObjectMeta:{worker02   /api/v1/nodes/worker02 f0994ba1-7e4e-4cc8-b3c8-25d34b25d9ce 35193 0 2020-03-06 02:30:30 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:worker02 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"1a:75:0a:e8:cc:76"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:192.168.1.251 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{42140479488 0} {} 41152812Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3823214592 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{37926431477 0} {} 37926431477 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3718356992 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-06 04:08:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-06 04:08:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-06 04:08:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-06 04:08:19 +0000 UTC,LastTransitionTime:2020-03-06 02:30:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.1.251,},NodeAddress{Type:Hostname,Address:worker02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:20200220105402131453637367482142,SystemUUID:EDBF7E33-228B-4233-93CF-7850B5A311E4,BootID:bd6a4f0f-5ddb-4585-83df-253b9292b617,KernelVersion:3.10.0-1062.12.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.17.3,KubeProxyVersion:v1.17.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-containers/conformance@sha256:502434491cbc3fac5d9a606a879e554cf881b2ba5b688bed25f2c33d3ff1c777 gcr.io/google-containers/conformance:v1.17.3],SizeBytes:575831882,},ContainerImage{Names:[sonobuoy/systemd-logs@sha256:fadad24a66ddd544987c38811108a73d1a306dd3b5e3f090b207786f2825ffde sonobuoy/systemd-logs:v0.3],SizeBytes:297365055,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[192.168.1.252/library/kube-proxy@sha256:8dfd672298f8fbcd37731e0f2e45be87199cee6ee1f482ffec766693dcec0ec4 192.168.1.252/library/kube-proxy:v1.17.3],SizeBytes:115964919,},ContainerImage{Names:[sonobuoy/sonobuoy@sha256:73f9cfe546ac6d0d5b94308293484c29c3b02fc341a975cfca97c80dd8728ed7 sonobuoy/sonobuoy:v0.17.2],SizeBytes:84339798,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[192.168.1.252/library/flannel@sha256:bd76b84c74ad70368a2341c2402841b75950df881388e43fc2aca000c546653a 192.168.1.252/library/flannel:v0.11.0-amd64],SizeBytes:52567296,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[192.168.1.252/library/pause@sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea 192.168.1.252/library/pause:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar  6 04:08:57.438: INFO: 
Logging kubelet events for node worker02
Mar  6 04:08:57.441: INFO: 
Logging pods the kubelet thinks are on node worker02
Mar  6 04:08:57.447: INFO: kube-flannel-ds-amd64-ztfzf started at 2020-03-06 02:30:30 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:08:57.447: INFO: 	Init container install-cni ready: true, restart count 0
Mar  6 04:08:57.447: INFO: 	Container kube-flannel ready: true, restart count 0
Mar  6 04:08:57.447: INFO: sonobuoy started at 2020-03-06 02:38:02 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.447: INFO: 	Container kube-sonobuoy ready: true, restart count 0
Mar  6 04:08:57.447: INFO: envoy-wgz76 started at 2020-03-06 02:30:55 +0000 UTC (1+1 container statuses recorded)
Mar  6 04:08:57.447: INFO: 	Init container envoy-initconfig ready: false, restart count 0
Mar  6 04:08:57.447: INFO: 	Container envoy ready: false, restart count 0
Mar  6 04:08:57.447: INFO: sonobuoy-e2e-job-67137ff64ac145d3 started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:08:57.447: INFO: 	Container e2e ready: true, restart count 0
Mar  6 04:08:57.447: INFO: 	Container sonobuoy-worker ready: true, restart count 0
Mar  6 04:08:57.447: INFO: sonobuoy-systemd-logs-daemon-set-2e5dbce20e154397-bpjtd started at 2020-03-06 02:38:12 +0000 UTC (0+2 container statuses recorded)
Mar  6 04:08:57.447: INFO: 	Container sonobuoy-worker ready: true, restart count 1
Mar  6 04:08:57.447: INFO: 	Container systemd-logs ready: true, restart count 0
Mar  6 04:08:57.447: INFO: kube-proxy-5xxdb started at 2020-03-06 02:30:30 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.447: INFO: 	Container kube-proxy ready: true, restart count 1
Mar  6 04:08:57.447: INFO: sample-webhook-deployment-5f65f8c764-gn9f8 started at 2020-03-06 04:08:03 +0000 UTC (0+1 container statuses recorded)
Mar  6 04:08:57.447: INFO: 	Container sample-webhook ready: true, restart count 0
W0306 04:08:57.451371      19 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar  6 04:08:57.472: INFO: 
Latency metrics for node worker02
Mar  6 04:08:57.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3247" for this suite.
STEP: Destroying namespace "webhook-3247-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [54.440 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance] [It]
  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

  Mar  6 04:08:57.288: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0000b3950>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:990
------------------------------
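The failure above ("timed out waiting for the condition" while waiting for the webhook configuration to be ready) is the same symptom shared by all 20 failed specs. A few standard kubectl probes can help localize it; the namespace below is taken from this run's log (webhook-3247), while the `app=sample-webhook` label is an assumption about how the e2e framework labels its test deployment:

```shell
set -euo pipefail

NS="webhook-3247"   # test namespace from the log above

# Skip gracefully when run offline, away from the cluster that produced this log
if ! command -v kubectl >/dev/null 2>&1; then
  echo "kubectl not found; run these probes against the affected cluster"
else
  # Which admission webhook configurations is the apiserver trying to call?
  kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations

  # Did the test's webhook pod, service, and endpoints come up in time?
  kubectl -n "$NS" get pods,svc,endpoints

  # Describe the webhook pod; in an air-gapped cluster that mirrors images
  # from a private registry (192.168.1.252 in this run), unmirrored e2e test
  # images are a common reason webhook backends never become ready
  kubectl -n "$NS" describe pod -l app=sample-webhook
fi
```

Since the apiserver must reach the webhook over the service network, flannel VXLAN connectivity from the control-plane nodes to worker02 (where the sample-webhook pod ran) is another plausible culprit worth checking.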
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":258,"skipped":4549,"failed":20,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource 
[Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
SSSSSSSSSSSSSSSSMar  6 04:08:57.547: INFO: Running AfterSuite actions on all nodes
Mar  6 04:08:57.547: INFO: Running AfterSuite actions on node 1
Mar  6 04:08:57.547: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":258,"skipped":4565,"failed":20,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}


Summarizing 20 Failures:

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should honor timeout [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2225

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource with different stored version [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1865

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate pod and apply defaults after mutation [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1055

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should deny crd creation [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2096

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should unconditionally reject operations on fail closed webhook [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1303

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny custom resource creation, update and deletion [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1788

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny attaching pod [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:963

[Fail] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:493

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] patching/updating a mutating webhook should work [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:528

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1389

[Fail] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [It] should be able to convert from CR v1 to CR v2 [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:493

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource with pruning [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1865

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny pod and configmap creation [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:911

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1865

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] listing mutating webhooks should work [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:682

[Fail] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [It] updates the published spec when one version gets renamed [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_publish_openapi.go:402

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] patching/updating a validating webhook should work [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432

[Fail] [sig-api-machinery] Aggregator [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:391

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] listing validating webhooks should work [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:608

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate configmap [Conformance] 
/workspace/anago-v1.17.3-beta.0.40+c94b9acd4b784f/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:990

Ran 278 of 4843 Specs in 5397.901 seconds
FAIL! -- 258 Passed | 20 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (5397.98s)
FAIL

Ginkgo ran 1 suite in 1h29m59.207711348s
Test Suite Failed