Created May 1, 2020 02:58
| 0.0 TEL | Telepresence 0.105 launched at Thu Apr 30 22:39:12 2020 | |
| 0.0 TEL | /usr/local/bin/telepresence | |
| 0.0 TEL | uname: uname_result(system='Darwin', node='bszeti-mac', release='19.3.0', version='Darwin Kernel Version 19.3.0: Thu Jan 9 20:58:23 PST 2020; root:xnu-6153.81.5~1/RELEASE_X86_64', machine='x86_64', processor='i386') | |
| 0.0 TEL | Platform: darwin | |
| 0.0 TEL | WSL: False | |
| 0.0 TEL | Python 3.7.7 (default, Mar 10 2020, 15:43:33) | |
| 0.0 TEL | [Clang 11.0.0 (clang-1100.0.33.17)] | |
| 0.0 TEL | BEGIN SPAN main.py:40(main) | |
| 0.0 TEL | BEGIN SPAN startup.py:83(set_kube_command) | |
| 0.0 TEL | Found kubectl -> /usr/local/bin/kubectl | |
| 0.0 TEL | Found oc -> /usr/local/bin/oc | |
| 0.0 TEL | [1] Capturing: kubectl config current-context | |
| 0.1 TEL | [1] captured in 0.07 secs. | |
| 0.1 TEL | [2] Capturing: kubectl --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr version --short | |
| 0.4 TEL | [2] captured in 0.33 secs. | |
| 0.4 TEL | [3] Capturing: kubectl --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr config view -o json | |
| 0.5 TEL | [3] captured in 0.08 secs. | |
| 0.5 TEL | [4] Capturing: kubectl --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr get ns t105-4 | |
| 1.0 TEL | [4] captured in 0.46 secs. | |
| 1.0 TEL | [5] Capturing: kubectl --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr api-versions | |
| 1.3 TEL | [5] captured in 0.35 secs. | |
| 1.3 TEL | Command: oc 1.10.11 | |
| 1.3 TEL | Context: t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr, namespace: t105-4, version: 1.11.0+d4cacc0 | |
| 1.3 TEL | [6] Capturing: minishift ip | |
| 1.7 TEL | [6] captured in 0.35 secs. | |
| 1.7 TEL | END SPAN startup.py:83(set_kube_command) 1.6s | |
| 1.7 TEL | Found ssh -> /usr/bin/ssh | |
| 1.7 TEL | [7] Capturing: ssh -V | |
| 1.7 TEL | [7] captured in 0.01 secs. | |
| 1.7 TEL | Found bash -> /bin/bash | |
| 1.7 TEL | Found sshuttle-telepresence -> /usr/local/Cellar/telepresence/0.105/libexec/sshuttle-telepresence | |
| 1.7 TEL | Found pfctl -> /sbin/pfctl | |
| 1.7 TEL | Found sudo -> /usr/bin/sudo | |
| 1.7 TEL | [8] Running: sudo -n echo -n | |
| 1.7 TEL | [8] ran in 0.03 secs. | |
| 1.7 >>> | Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list of method limitations see https://telepresence.io/reference/methods.html | |
| 1.7 TEL | Found sshfs -> /usr/local/bin/sshfs | |
| 1.7 TEL | Found umount -> /sbin/umount | |
| 1.7 >>> | Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details. | |
| 1.7 TEL | [9] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get pods telepresence-connectivity-check --ignore-not-found | |
| 2.1 TEL | [9] ran in 0.39 secs. | |
| 2.3 TEL | Scout info: {'latest_version': '0.105', 'application': 'telepresence', 'notices': []} | |
| 2.3 TEL | BEGIN SPAN deployment.py:182(create_new_deployment) | |
| 2.3 >>> | Starting network proxy to cluster using new Deployment telepresence-1588300752-512629-9023 | |
| 2.3 TEL | [10] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete --ignore-not-found svc,deploy --selector=telepresence=f7c1375e27d145fd8352f3f22bfa4eff | |
| 2.8 10 | No resources found | |
| 2.8 TEL | [10] ran in 0.45 secs. | |
| 2.8 TEL | [11] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create -f - | |
| 3.4 11 | deployment.apps/telepresence-1588300752-512629-9023 created | |
| 3.4 TEL | [11] ran in 0.58 secs. | |
| 3.4 TEL | END SPAN deployment.py:182(create_new_deployment) 1.0s | |
| 3.4 TEL | BEGIN SPAN remote.py:142(get_remote_info) | |
| 3.4 TEL | BEGIN SPAN remote.py:75(get_deployment_json) | |
| 3.4 TEL | [12] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get deployment -o json --selector=telepresence=f7c1375e27d145fd8352f3f22bfa4eff | |
| 4.0 TEL | [12] captured in 0.61 secs. | |
| 4.0 TEL | END SPAN remote.py:75(get_deployment_json) 0.6s | |
| 4.0 TEL | Searching for Telepresence pod: | |
| 4.0 TEL | with name telepresence-1588300752-512629-9023-* | |
| 4.0 TEL | with labels {'telepresence': 'f7c1375e27d145fd8352f3f22bfa4eff'} | |
| 4.0 TEL | [13] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get pod -o json --selector=telepresence=f7c1375e27d145fd8352f3f22bfa4eff | |
| 4.5 TEL | [13] captured in 0.54 secs. | |
| 4.5 TEL | Checking telepresence-1588300752-512629-9023-669fc5989f-pxxzx | |
| 4.5 TEL | Looks like we've found our pod! | |
| 4.5 TEL | BEGIN SPAN remote.py:104(wait_for_pod) | |
| 4.5 TEL | [14] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get pod telepresence-1588300752-512629-9023-669fc5989f-pxxzx -o json | |
| 5.3 TEL | [14] captured in 0.76 secs. | |
| 5.5 TEL | [15] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get pod telepresence-1588300752-512629-9023-669fc5989f-pxxzx -o json | |
| 6.2 TEL | [15] captured in 0.61 secs. | |
| 6.2 TEL | END SPAN remote.py:104(wait_for_pod) 1.6s | |
| 6.2 TEL | END SPAN remote.py:142(get_remote_info) 2.8s | |
| 6.2 TEL | BEGIN SPAN connect.py:37(connect) | |
| 6.2 TEL | [16] Launching kubectl logs: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 logs -f telepresence-1588300752-512629-9023-669fc5989f-pxxzx --container telepresence-1588300752-512629-9023 --tail=10 | |
| 6.2 TEL | [17] Launching kubectl port-forward: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 port-forward telepresence-1588300752-512629-9023-669fc5989f-pxxzx 57316:8022 | |
| 6.2 TEL | [18] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 /bin/true | |
| 6.2 TEL | [18] exit 255 in 0.02 secs. | |
| 6.4 TEL | [19] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 /bin/true | |
| 6.5 TEL | [19] exit 255 in 0.02 secs. | |
| 6.7 TEL | [20] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 /bin/true | |
| 6.7 TEL | [20] exit 255 in 0.02 secs. | |
| 7.0 TEL | [21] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 /bin/true | |
| 7.0 TEL | [21] exit 255 in 0.02 secs. | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [-] Loading ./forwarder.py... | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [-] /etc/resolv.conf changed, reparsing | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [-] Resolver added ('192.168.0.71', 53) to server list | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [-] SOCKSv5Factory starting on 9050 | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [socks.SOCKSv5Factory#info] Starting factory <socks.SOCKSv5Factory object at 0x7f2952dcd278> | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [-] DNSDatagramProtocol starting on 9053 | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7f2952dcd4e0> | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [-] Loaded. | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 20.3.0 (/opt/rh/rh-python36/root/usr/bin/python3 3.6.9) starting up. | |
| 7.2 16 | 2020-05-01T02:39:18+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor. | |
| 7.3 TEL | [22] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 /bin/true | |
| 7.3 TEL | [22] exit 255 in 0.02 secs. | |
| 7.5 TEL | [23] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 /bin/true | |
| 7.6 TEL | [23] exit 255 in 0.02 secs. | |
| 7.6 17 | Forwarding from 127.0.0.1:57316 -> 8022 | |
| 7.6 17 | Forwarding from [::1]:57316 -> 8022 | |
| 7.8 TEL | [24] Running: ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 /bin/true | |
| 7.8 17 | Handling connection for 57316 | |
| 9.3 TEL | [24] ran in 1.50 secs. | |
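The repeated `ssh ... /bin/true` probes above (exit 255 at [18] through [23], then success at [24]) are a readiness poll: telepresence keeps probing until the `kubectl port-forward` on port 57316 is actually accepting connections. A minimal sketch of the same pattern, with a hypothetical `wait_until_ready` helper and a generic command standing in for the ssh probe:

```python
import subprocess
import sys
import time

def wait_until_ready(cmd, attempts=20, delay=0.25):
    """Retry cmd until it exits 0, the way telepresence retries its
    ssh probe until the port-forward comes up. Returns False if the
    command never succeeds within the given number of attempts."""
    for _ in range(attempts):
        if subprocess.run(cmd, capture_output=True).returncode == 0:
            return True
        time.sleep(delay)
    return False

# A command that succeeds immediately, as the real ssh probe does at [24].
ok = wait_until_ready([sys.executable, "-c", "raise SystemExit(0)"])
```

This is a sketch of the pattern, not telepresence's actual implementation; the real probe also discards host-key state with `-F /dev/null -oStrictHostKeyChecking=no`.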
| 9.3 >>> | | |
| 9.3 >>> | No traffic is being forwarded from the remote Deployment to your local machine. You can use the --expose option to specify which ports you want to forward. | |
| 9.3 >>> | | |
| 9.3 TEL | Launching Web server for proxy poll | |
| 9.3 TEL | [25] Launching SSH port forward (socks and proxy poll): ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 -L127.0.0.1:57331:127.0.0.1:9050 -R9055:127.0.0.1:57332 | |
| 9.3 TEL | END SPAN connect.py:37(connect) 3.2s | |
| 9.3 TEL | BEGIN SPAN remote_env.py:29(get_remote_env) | |
| 9.3 TEL | [26] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 exec telepresence-1588300752-512629-9023-669fc5989f-pxxzx --container telepresence-1588300752-512629-9023 -- python3 podinfo.py | |
| 9.4 17 | Handling connection for 57316 | |
| 10.0 TEL | [26] captured in 0.64 secs. | |
| 10.0 TEL | END SPAN remote_env.py:29(get_remote_env) 0.6s | |
| 10.0 TEL | BEGIN SPAN mount.py:30(mount_remote_volumes) | |
| 10.0 TEL | [27] Running: sshfs -p 57316 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null telepresence@127.0.0.1:/ /tmp/tel-vj1crxg7/fs | |
| 10.1 17 | Handling connection for 57316 | |
| 11.9 TEL | [27] ran in 1.93 secs. | |
| 11.9 TEL | END SPAN mount.py:30(mount_remote_volumes) 1.9s | |
| 11.9 TEL | BEGIN SPAN vpn.py:280(connect_sshuttle) | |
| 11.9 TEL | BEGIN SPAN vpn.py:77(get_proxy_cidrs) | |
| 11.9 TEL | [28] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get nodes -o json | |
| 12.8 TEL | [28] captured in 0.94 secs. | |
| 12.8 TEL | [29] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get pods --all-namespaces -o json | |
| 14.0 TEL | [29] captured in 1.13 secs. | |
| 14.0 TEL | [30] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get services -o json | |
| 14.4 TEL | [30] captured in 0.38 secs. | |
| 14.4 TEL | [31] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create service clusterip telepresence-1588300766-880924-9023 --tcp=3000 | |
| 14.7 31 | service/telepresence-1588300766-880924-9023 created | |
| 14.7 TEL | [31] ran in 0.38 secs. | |
| 14.7 TEL | [32] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create service clusterip telepresence-1588300767-2593641-9023 --tcp=3000 | |
| 15.1 32 | service/telepresence-1588300767-2593641-9023 created | |
| 15.1 TEL | [32] ran in 0.38 secs. | |
| 15.1 TEL | [33] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create service clusterip telepresence-1588300767-639189-9023 --tcp=3000 | |
| 15.5 33 | service/telepresence-1588300767-639189-9023 created | |
| 15.5 TEL | [33] ran in 0.36 secs. | |
| 15.5 TEL | [34] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create service clusterip telepresence-1588300768-001864-9023 --tcp=3000 | |
| 15.8 34 | service/telepresence-1588300768-001864-9023 created | |
| 15.8 TEL | [34] ran in 0.36 secs. | |
| 15.9 TEL | [35] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create service clusterip telepresence-1588300768-363914-9023 --tcp=3000 | |
| 16.2 35 | service/telepresence-1588300768-363914-9023 created | |
| 16.2 TEL | [35] ran in 0.38 secs. | |
| 16.2 TEL | [36] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create service clusterip telepresence-1588300768-74653-9023 --tcp=3000 | |
| 16.6 36 | service/telepresence-1588300768-74653-9023 created | |
| 16.6 TEL | [36] ran in 0.39 secs. | |
| 16.6 TEL | [37] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create service clusterip telepresence-1588300769-13762-9023 --tcp=3000 | |
| 17.0 37 | service/telepresence-1588300769-13762-9023 created | |
| 17.0 TEL | [37] ran in 0.35 secs. | |
| 17.0 TEL | [38] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 create service clusterip telepresence-1588300769-488477-9023 --tcp=3000 | |
| 17.4 38 | service/telepresence-1588300769-488477-9023 created | |
| 17.4 TEL | [38] ran in 0.39 secs. | |
| 17.4 TEL | [39] Capturing: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 get services -o json | |
| 17.9 TEL | [39] captured in 0.50 secs. | |
| 17.9 TEL | [40] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete service telepresence-1588300766-880924-9023 | |
| 18.3 40 | service "telepresence-1588300766-880924-9023" deleted | |
| 18.3 TEL | [40] ran in 0.42 secs. | |
| 18.3 TEL | [41] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete service telepresence-1588300767-2593641-9023 | |
| 18.7 41 | service "telepresence-1588300767-2593641-9023" deleted | |
| 18.8 TEL | [41] ran in 0.48 secs. | |
| 18.8 TEL | [42] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete service telepresence-1588300767-639189-9023 | |
| 19.2 42 | service "telepresence-1588300767-639189-9023" deleted | |
| 19.2 TEL | [42] ran in 0.47 secs. | |
| 19.2 TEL | [43] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete service telepresence-1588300768-001864-9023 | |
| 19.7 43 | service "telepresence-1588300768-001864-9023" deleted | |
| 19.7 TEL | [43] ran in 0.48 secs. | |
| 19.7 TEL | [44] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete service telepresence-1588300768-363914-9023 | |
| 20.1 44 | service "telepresence-1588300768-363914-9023" deleted | |
| 20.2 TEL | [44] ran in 0.45 secs. | |
| 20.2 TEL | [45] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete service telepresence-1588300768-74653-9023 | |
| 20.6 45 | service "telepresence-1588300768-74653-9023" deleted | |
| 20.6 TEL | [45] ran in 0.47 secs. | |
| 20.6 TEL | [46] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete service telepresence-1588300769-13762-9023 | |
| 21.1 46 | service "telepresence-1588300769-13762-9023" deleted | |
| 21.1 TEL | [46] ran in 0.43 secs. | |
| 21.1 TEL | [47] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete service telepresence-1588300769-488477-9023 | |
| 21.5 47 | service "telepresence-1588300769-488477-9023" deleted | |
| 21.5 TEL | [47] ran in 0.44 secs. | |
| 21.5 >>> | Guessing that Services IP range is 172.30.0.0/16. Services started after this point will be inaccessible if they are outside this range; restart telepresence if you can't access a new Service. | |
| 21.5 TEL | END SPAN vpn.py:77(get_proxy_cidrs) 9.6s | |
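The span above shows how the Services IP range is guessed: telepresence creates eight throwaway ClusterIP services ([31] through [38]), reads back their assigned IPs ([39]), deletes them ([40] through [47]), and collapses the sampled addresses into a CIDR for sshuttle to proxy. A rough sketch of that inference, assuming (as the log message suggests, not as the upstream algorithm guarantees) that the samples are collapsed into a single /16:

```python
import ipaddress

def guess_service_cidr(service_ips):
    """Collapse sampled ClusterIPs into one /16, mirroring the
    'Guessing that Services IP range is 172.30.0.0/16' message."""
    nets = {ipaddress.ip_network(f"{ip}/16", strict=False) for ip in service_ips}
    if len(nets) != 1:
        raise ValueError("sampled ClusterIPs span multiple /16 ranges")
    return str(nets.pop())

# Sample IPs are illustrative; the log only shows the resulting range.
print(guess_service_cidr(["172.30.1.5", "172.30.200.33"]))  # 172.30.0.0/16
```

The guessed range is then passed straight to sshuttle, which is why the log warns that services landing outside it become unreachable until telepresence is restarted.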
| 21.5 TEL | [48] Launching sshuttle: sshuttle-telepresence -v --dns --method auto -e 'ssh -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null' -r telepresence@127.0.0.1:57316 --to-ns 127.0.0.1:9053 0.0.0.0/0 172.30.0.0/16 | |
| 21.6 TEL | BEGIN SPAN vpn.py:303(connect_sshuttle,sshuttle-wait) | |
| 21.6 TEL | Wait for vpn-tcp connection: hellotelepresence-0 | |
| 21.6 TEL | [49] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-0")' | |
| 21.9 TEL | [49] captured in 0.28 secs. | |
| 21.9 TEL | Resolved hellotelepresence-0. 2 more... | |
| 21.9 TEL | [50] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-0.a.sanity.check.telepresence.io")' | |
| 22.0 TEL | [50] captured in 0.13 secs. | |
| 22.1 TEL | Wait for vpn-tcp connection: hellotelepresence-1 | |
| 22.1 TEL | [51] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-1")' | |
| 22.3 TEL | [51] captured in 0.18 secs. | |
| 22.3 TEL | Resolved hellotelepresence-1. 1 more... | |
| 22.3 TEL | [52] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-1.a.sanity.check.telepresence.io")' | |
| 22.4 48 | Starting sshuttle proxy. | |
| 22.4 TEL | [52] captured in 0.12 secs. | |
| 22.5 TEL | Wait for vpn-tcp connection: hellotelepresence-2 | |
| 22.5 TEL | [53] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-2")' | |
| 22.7 TEL | [53] captured in 0.23 secs. | |
| 22.7 TEL | Resolved hellotelepresence-2. 0 more... | |
| 22.7 TEL | END SPAN vpn.py:303(connect_sshuttle,sshuttle-wait) 1.1s | |
| 22.7 TEL | END SPAN vpn.py:280(connect_sshuttle) 10.8s | |
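The `hellotelepresence-N` lookups in the span above ([49] through [53]) are sanity checks: those sentinel hostnames only resolve once sshuttle is intercepting DNS and forwarding it to the in-cluster resolver, so a successful lookup proves the vpn-tcp tunnel is live. The check itself is just a `socket.gethostbyname` call, as the captured `python3 -c` one-liners show; a self-contained version, using `localhost` as a stand-in name that resolves without the tunnel:

```python
import socket

def dns_check(hostname):
    """Return True if hostname resolves. Telepresence loops this over
    sentinel names like hellotelepresence-0 until the tunnel answers
    (or a timeout declares the sshuttle connection failed)."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

print(dns_check("localhost"))  # True on any working resolver
```

Without the tunnel, `dns_check("hellotelepresence-0")` would normally fail, which is exactly the signal the wait loop relies on.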
| 22.7 >>> | Connected. Flushing DNS cache. | |
| 22.7 TEL | [54] Running: sudo -n /usr/bin/pkill -HUP mDNSResponder | |
| 22.8 TEL | [54] ran in 0.05 secs. | |
| 24.1 >>> | Setup complete. Launching your command. | |
| 24.3 TEL | Everything launched. Waiting to exit... | |
| 24.3 TEL | BEGIN SPAN runner.py:726(wait_for_exit) | |
| 25.3 48 | firewall manager: Starting firewall with Python version 3.7.7 | |
| 25.4 48 | firewall manager: ready method name pf. | |
| 25.4 48 | IPv6 enabled: True | |
| 25.4 48 | UDP enabled: False | |
| 25.4 48 | DNS enabled: True | |
| 25.4 48 | TCP redirector listening on ('::1', 12300, 0, 0). | |
| 25.4 48 | TCP redirector listening on ('127.0.0.1', 12300). | |
| 25.4 48 | DNS listening on ('::1', 12300, 0, 0). | |
| 25.4 48 | DNS listening on ('127.0.0.1', 12300). | |
| 25.4 48 | Starting client with Python version 3.7.7 | |
| 25.4 48 | c : connecting to server... | |
| 25.4 17 | Handling connection for 57316 | |
| 25.5 48 | Warning: Permanently added '[127.0.0.1]:57316' (ECDSA) to the list of known hosts. | |
| 26.8 48 | Starting server with Python version 2.7.5 | |
| 26.8 48 | s: latency control setting = True | |
| 26.8 48 | s: WARNING: Neither ip nor netstat were found on the server. | |
| 26.8 48 | s: available routes: | |
| 26.8 48 | c : Connected. | |
| 26.8 48 | firewall manager: setting up. | |
| 26.8 48 | >> pfctl -s Interfaces -i lo -v | |
| 26.8 48 | >> pfctl -s all | |
| 26.8 48 | >> pfctl -a sshuttle6-12300 -f /dev/stdin | |
| 26.8 48 | >> pfctl -E | |
| 26.8 48 | >> pfctl -s Interfaces -i lo -v | |
| 26.8 48 | >> pfctl -s all | |
| 26.8 48 | >> pfctl -a sshuttle-12300 -f /dev/stdin | |
| 26.8 48 | >> pfctl -E | |
| 30.7 48 | c : DNS request from ('10.12.10.214', 60857) to None: 102 bytes | |
| 31.7 48 | c : DNS request from ('10.12.10.214', 60857) to None: 102 bytes | |
| 31.8 TEL | [55] Running: sudo -n echo -n | |
| 31.9 TEL | [55] ran in 0.03 secs. | |
| 32.9 48 | c : Accept TCP: 10.12.10.214:57428 -> 52.216.99.157:443. | |
| 33.7 48 | c : DNS request from ('10.12.10.214', 60857) to None: 102 bytes | |
| 36.7 48 | c : DNS request from ('10.12.10.214', 55117) to None: 37 bytes | |
| 37.4 TEL | [25] SSH port forward (socks and proxy poll): exit 255 | |
| 37.5 TEL | END SPAN runner.py:726(wait_for_exit) 13.2s | |
| 37.5 >>> | | |
| 37.5 >>> | Background process (SSH port forward (socks and proxy poll)) exited with return code 255. Command was: | |
| 37.5 >>> | ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 -L127.0.0.1:57331:127.0.0.1:9050 -R9055:127.0.0.1:57332 | |
| 37.5 >>> | | |
| 37.5 >>> | | |
| 37.5 >>> | Proxy to Kubernetes exited. This is typically due to a lost connection. | |
| 37.5 >>> | | |
| 37.5 TEL | EXITING with status code 255 | |
| 37.5 >>> | Exit cleanup in progress | |
| 37.5 TEL | (Cleanup) Terminate local process | |
| 37.5 TEL | Killing local process... | |
| 37.7 48 | c : DNS request from ('10.12.10.214', 60857) to None: 102 bytes | |
| 37.7 48 | c : DNS request from ('10.12.10.214', 55117) to None: 37 bytes | |
| 38.2 16 | 2020-05-01T02:39:50+0000 [Poll#error] Failed to contact Telepresence client: | |
| 38.2 16 | 2020-05-01T02:39:50+0000 [Poll#error] [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion.>] | |
| 38.2 16 | 2020-05-01T02:39:50+0000 [Poll#warn] Perhaps it's time to exit? | |
| 38.5 TEL | Main process (bash --norc) | |
| 38.5 TEL | exited with code -9. | |
| 38.5 TEL | (Cleanup) Kill BG process [48] sshuttle | |
| 38.5 48 | >> pfctl -a sshuttle6-12300 -F all | |
| 38.5 TEL | (Cleanup) Unmount remote filesystem | |
| 38.5 TEL | [56] Running: umount -f /tmp/tel-vj1crxg7/fs | |
| 38.5 48 | >> pfctl -X 13218859338845596859 | |
| 38.5 48 | >> pfctl -a sshuttle-12300 -F all | |
| 38.5 48 | >> pfctl -X 13218859338845592635 | |
| 38.7 17 | Handling connection for 57316 | |
| 40.1 56 | umount: /tmp/tel-vj1crxg7/fs: not currently mounted | |
| 40.1 TEL | [56] exit 1 in 1.54 secs. | |
| 40.1 TEL | (Cleanup) Unmount remote filesystem failed: | |
| 40.1 TEL | (Cleanup) Command '['umount', '-f', '/tmp/tel-vj1crxg7/fs']' returned non-zero exit status 1. | |
| 40.1 TEL | (Cleanup) Kill BG process [25] SSH port forward (socks and proxy poll) | |
| 40.1 TEL | (Cleanup) Kill Web server for proxy poll | |
| 40.5 TEL | (Cleanup) Kill BG process [17] kubectl port-forward | |
| 40.5 TEL | (Cleanup) Kill BG process [16] kubectl logs | |
| 40.5 TEL | [17] kubectl port-forward: exit -15 | |
| 40.5 TEL | (Cleanup) Delete new deployment | |
| 40.5 TEL | [16] kubectl logs: exit -15 | |
| 40.5 >>> | Cleaning up Deployment telepresence-1588300752-512629-9023 | |
| 40.5 TEL | Background process (kubectl logs) exited with return code -15. Command was: | |
| 40.5 TEL | oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 logs -f telepresence-1588300752-512629-9023-669fc5989f-pxxzx --container telepresence-1588300752-512629-9023 --tail=10 | |
| 40.5 TEL | [57] Running: oc --context t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr --namespace t105-4 delete --ignore-not-found svc,deploy --selector=telepresence=f7c1375e27d145fd8352f3f22bfa4eff | |
| 40.5 TEL | | |
| 40.5 TEL | Recent output was: | |
| 40.5 TEL | 2020-05-01T02:39:18+0000 [-] SOCKSv5Factory starting on 9050 | |
| 40.5 TEL | 2020-05-01T02:39:18+0000 [socks.SOCKSv5Factory#info] Starting factory <socks.SOCKSv5Factory object at 0x7f2952dcd278> | |
| 40.5 TEL | 2020-05-01T02:39:18+0000 [-] DNSDatagramProtocol starting on 9053 | |
| 40.5 TEL | 2020-05-01T02:39:18+0000 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7f2952dcd4e0> | |
| 40.5 TEL | 2020-05-01T02:39:18+0000 [-] Loaded. | |
| 40.5 TEL | 2020-05-01T02:39:18+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 20.3.0 (/opt/rh/rh-python36/root/usr/bin/python3 3.6.9) starting up. | |
| 40.5 TEL | 2020-05-01T02:39:18+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor. | |
| 40.5 TEL | 2020-05-01T02:39:50+0000 [Poll#error] Failed to contact Telepresence client: | |
| 40.5 TEL | 2020-05-01T02:39:50+0000 [Poll#error] [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion.>] | |
| 40.5 TEL | 2020-05-01T02:39:50+0000 [Poll#warn] Perhaps it's time to exit? | |
| 41.2 48 | packet_write_wait: Connection to 127.0.0.1 port 57316: Broken pipe | |
| 41.2 TEL | [48] sshuttle: exit -15 | |
| 41.6 57 | deployment.extensions "telepresence-1588300752-512629-9023" deleted | |
| 41.6 TEL | [57] ran in 1.13 secs. | |
| 41.6 TEL | (Cleanup) Kill sudo privileges holder | |
| 41.6 TEL | (Cleanup) Stop time tracking | |
| 41.6 TEL | END SPAN main.py:40(main) 41.6s | |
| 41.6 TEL | (Cleanup) Remove temporary directory | |
| 41.8 TEL | (Cleanup) Save caches | |
| 41.9 TEL | (sudo privileges holder thread exiting) |
| ssh-keygen: generating new host keys: RSA1 RSA DSA ECDSA ED25519 | |
| Retrieving this pod's namespace from the process environment | |
| Failed: TELEPRESENCE_CONTAINER_NAMESPACE not set | |
| Reading this pod's namespace from the k8s service account | |
| Pod's namespace is 't105-4' | |
| Listening... | |
| 2020-05-01T02:39:18+0000 [-] Loading ./forwarder.py... | |
| 2020-05-01T02:39:18+0000 [-] /etc/resolv.conf changed, reparsing | |
| 2020-05-01T02:39:18+0000 [-] Resolver added ('192.168.0.71', 53) to server list | |
| 2020-05-01T02:39:18+0000 [-] SOCKSv5Factory starting on 9050 | |
| 2020-05-01T02:39:18+0000 [socks.SOCKSv5Factory#info] Starting factory <socks.SOCKSv5Factory object at 0x7f2952dcd278> | |
| 2020-05-01T02:39:18+0000 [-] DNSDatagramProtocol starting on 9053 | |
| 2020-05-01T02:39:18+0000 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7f2952dcd4e0> | |
| 2020-05-01T02:39:18+0000 [-] Loaded. | |
| 2020-05-01T02:39:18+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 20.3.0 (/opt/rh/rh-python36/root/usr/bin/python3 3.6.9) starting up. | |
| 2020-05-01T02:39:18+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor. | |
| 2020-05-01T02:39:50+0000 [Poll#error] Failed to contact Telepresence client: | |
| 2020-05-01T02:39:50+0000 [Poll#error] [<twisted.python.failure.Failure twisted.internet.error.ConnectionLost: Connection to the other side was lost in a non-clean fashion.>] | |
| 2020-05-01T02:39:50+0000 [Poll#warn] Perhaps it's time to exit? | |
| 2020-05-01T02:39:54+0000 [-] Received SIGTERM, shutting down. | |
| 2020-05-01T02:39:54+0000 [DNSDatagramProtocol (UDP)] (UDP Port 9053 Closed) | |
| 2020-05-01T02:39:54+0000 [DNSDatagramProtocol (UDP)] Stopping protocol <twisted.names.dns.DNSDatagramProtocol object at 0x7f2952dcd4e0> | |
| 2020-05-01T02:39:54+0000 [socks.SOCKSv5Factory] (TCP Port 9050 Closed) | |
| 2020-05-01T02:39:54+0000 [socks.SOCKSv5Factory#info] Stopping factory <socks.SOCKSv5Factory object at 0x7f2952dcd278> | |
| 2020-05-01T02:39:54+0000 [-] Main loop terminated. | |
| 2020-05-01T02:39:54+0000 [twisted.scripts._twistd_unix.UnixAppLogger#info] Server Shut Down. |
| $ telepresence | |
| T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and | |
| T: headless services with --also-proxy. For a full list of method limitations see https://telepresence.io/reference/methods.html | |
| T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details. | |
| T: Starting network proxy to cluster using new Deployment telepresence-1588300752-512629-9023 | |
| T: No traffic is being forwarded from the remote Deployment to your local machine. You can use the --expose option to specify which ports you want to forward. | |
| T: Guessing that Services IP range is 172.30.0.0/16. Services started after this point will be inaccessible if they are outside this range; restart telepresence if you can't access a new Service. | |
| T: Connected. Flushing DNS cache. | |
| T: Setup complete. Launching your command. | |
| The default interactive shell is now zsh. | |
| To update your account to use zsh, please run `chsh -s /bin/zsh`. | |
| For more details, please visit https://support.apple.com/kb/HT208050. | |
| @t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr|bash-3.2$ | |
| @t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr|bash-3.2$ | |
| @t105-4/master-newyork-3305-open-redhat-com:443/opentlc-mgr|bash-3.2$ | |
| T: Background process (SSH port forward (socks and proxy poll)) exited with return code 255. Command was: | |
| T: ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -q -p 57316 telepresence@127.0.0.1 -L127.0.0.1:57331:127.0.0.1:9050 -R9055:127.0.0.1:57332 | |
| T: Proxy to Kubernetes exited. This is typically due to a lost connection. | |
| T: Exit cleanup in progress | |
| T: Cleaning up Deployment telepresence-1588300752-512629-9023 |