@heartonbit
Last active May 4, 2018 04:16

Revisions

  1. heartonbit revised this gist May 4, 2018. 1 changed file with 4 additions and 14 deletions.
python_concurrent: 18 changes (4 additions, 14 deletions)
@@ -1,21 +1,11 @@
 import concurrent.futures
 import urllib.request
 
-URLS = ['http://www.foxnews.com/',
-        'http://www.cnn.com/',
-        'http://europe.wsj.com/',
-        'http://www.bbc.co.uk/',
-        'http://some-made-up-domain.com/']
+def do(params):
+    return "hello"
 
-# Retrieve a single page and report the URL and contents
-def load_url(url, timeout):
-    with urllib.request.urlopen(url, timeout=timeout) as conn:
-        return conn.read()
-
 # We can use a with statement to ensure threads are cleaned up promptly
 with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
     # Start the load operations and mark each future with its URL
-    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
+    future = executor.submit(do, params)
+
-    for future in concurrent.futures.as_completed(future_to_url):
-        url = future_to_url[future]
-        try:
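
    Note that, as displayed, this revision submits do(params) without ever defining params, so running it verbatim would raise a NameError. A minimal runnable sketch of the revised version, where the params value is an assumption added purely for illustration:

    import concurrent.futures

    def do(params):
        # Placeholder task from the revision; it ignores its argument and returns a constant
        return "hello"

    params = {"key": "value"}  # hypothetical argument; the revised gist leaves this undefined

    # The with statement ensures worker threads are cleaned up promptly
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        future = executor.submit(do, params)  # schedule do(params) on a worker thread
        print(future.result())                # block until the task finishes and print "hello"
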
  2. heartonbit created this gist May 4, 2018.
python_concurrent: 26 changes (26 additions, 0 deletions)
@@ -0,0 +1,26 @@
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
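
    For comparison, the same fan-out can be written with executor.map, which yields results in the order of URLS rather than in completion order. This is a sketch, not part of the original gist; note that map() re-raises a task's exception when its result is reached, so the unreachable made-up domain would abort the loop instead of being handled per URL as in the try/except version above:

    import concurrent.futures
    import urllib.request

    URLS = ['http://www.foxnews.com/',
            'http://www.cnn.com/',
            'http://europe.wsj.com/',
            'http://www.bbc.co.uk/',
            'http://some-made-up-domain.com/']

    def load_url(url, timeout=60):
        # Same retrieval helper as above, with a default timeout so map() can pass a single argument
        with urllib.request.urlopen(url, timeout=timeout) as conn:
            return conn.read()

    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        # Results arrive in input order; an exception from any call propagates out of the loop
        for url, data in zip(URLS, executor.map(load_url, URLS)):
            print('%r page is %d bytes' % (url, len(data)))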