concepts
- forward and backward propagation
- vanishing gradient
- image convolution operation
- feature map, filter/kernel
- receptive field
- embedding
- translation invariance
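To make the convolution-related items above concrete, here is a minimal sketch of the sliding-window operation that produces a feature map from an image and a filter/kernel (like most deep-learning frameworks, it actually computes cross-correlation; the kernel is not flipped):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution: slide the kernel over the image and take
    the sum of elementwise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output cell sees only a kh x kw patch: its receptive field
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

feature_map = conv2d(np.ones((4, 4)), np.ones((2, 2)))
print(feature_map.shape)  # (3, 3); every entry is 4.0
```

Because the same kernel weights are reused at every position, a pattern is detected wherever it appears in the image, which is where translation invariance comes from.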
See how a minor change to your commit message style can make a difference.
git commit -m "<type>(<optional scope>): <description>" \
  -m "<optional body>" \
  -m "<optional footer>"
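A concrete run of that pattern, in a throwaway repo so it can be executed as-is (the type `docs`, scope `readme`, and message bodies below are illustrative, not prescribed):

```shell
# demonstrate the multi -m form in a throwaway repo
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Example"
echo "hello" > README
git add README
git commit -q -m "docs(readme): add project summary" \
  -m "Initial README with a one-line description." \
  -m "Refs: none"
git log -1 --pretty=%s
```

Each `-m` becomes its own paragraph in the commit message, so the subject, body, and footer stay cleanly separated.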
###################################################
##
## Alertmanager YAML configuration for routing.
##
## Will route alerts with a code_owner label to the slack-code-owners receiver
## configured above, but will continue processing them to send to both a
## central Slack channel (slack-monitoring) and PagerDuty receivers
## (pd-warning and pd-critical).
##
version: '2'
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:4.1.0"
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
Kafka 0.11.0.0 (Confluent 3.3.0) added support for manipulating the offsets of a consumer group via the kafka-consumer-groups CLI command.

kafka-consumer-groups --bootstrap-server <kafkahost:port> --group <group_id> --describe

Note the values under "CURRENT-OFFSET" and "LOG-END-OFFSET". "CURRENT-OFFSET" is the offset where this consumer group is currently at in each of the partitions.
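The same tool can also rewrite those offsets. For example, rewinding a group to the earliest available offset for one topic (a template, not runnable as-is: the host, group, and topic are placeholders; swap --execute for --dry-run to preview the change first):

```shell
kafka-consumer-groups --bootstrap-server <kafkahost:port> --group <group_id> \
  --topic <topic> --reset-offsets --to-earliest --execute
```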
#
# Requires downloading the source of cmake, protobuf, gsasl, as well as hdfs3.
#
mv ~/Downloads/cmake-3.9.0-rc5.tar.gz .
tar -zxf cmake-3.9.0-rc5.tar.gz
cd cmake-3.9.0-rc5
./bootstrap && make
sudo make install
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
## in order to cleanly shut down a node with running jobs, the node needs to be
## drained, and then we need to wait for allocations to be migrated away. in
## this script, we:
## * set up a watch for node-update evals for the current node
## * wait for allocations currently running to complete
## * wait for allocations from the watched evals to start running
##
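The "wait for allocations to complete" step above boils down to a polling loop. A minimal sketch, assuming a caller-supplied `get_running_allocs` callable (hypothetical here; in the real script it would query Nomad's HTTP API for the node's allocations):

```python
import time

def wait_for_allocs_to_drain(get_running_allocs, poll_interval=1.0, timeout=60.0):
    """Poll until no allocations are running, or raise on timeout.

    `get_running_allocs` is a hypothetical callable returning the list of
    allocations still running on the drained node.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not get_running_allocs():
            # nothing left running on the node: safe to shut down
            return True
        time.sleep(poll_interval)
    raise TimeoutError("allocations did not drain in time")

# demo with a stub that reports one allocation still running, then none
states = [["alloc-1"], []]
print(wait_for_allocs_to_drain(lambda: states.pop(0), poll_interval=0.01))  # True
```

The same loop shape works for the other two waits in the outline; only the predicate being polled changes.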
I've had many people ask me questions about OpenTracing, often in relation to OpenZipkin. I've seen assertions about how it is vendor-neutral and is the lock-in cure. This post is not a sanctioned, polished, or otherwise muted view; rather, it is what I personally think about what OpenTracing is and is not, and what it helps and does not help with. Scroll to the very end if this is too long. Feel free to comment if I made any factual mistakes, or if you just want to add your own view.
OpenTracing is documentation and library interfaces for distributed tracing instrumentation. To be "OpenTracing" requires bundling its interfaces in your work, so that others can use it to time distributed operations with the same library.
OpenTracing interfaces are targeted at authors of instrumentation libraries, and at those who want to collaborate with traces created by them. For example, something started a trace somewhere, and I add a notable event to that trace. Structured logging was recently added to OpenTracing.
#!/bin/bash
set -o nounset

function log() {
  echo
  echo "========================================================================="
  echo "== $*"
  echo "=="
}
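A quick usage sketch (the function is re-declared so the snippet runs on its own):

```shell
#!/bin/bash
log() {
  echo
  echo "========================================================================="
  echo "== $*"
  echo "=="
}

log "building project"
```

Calling `log` between the stages of a longer script makes each stage easy to spot when scrolling through its output.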
# get existing query results
users = get_query_result(132)            # rows with {id, name}
events_by_users = get_query_result(131)  # rows with {user_id, count}

# actual merging; can be replaced with a helper function and/or some Pandas code
events_dict = {}
for row in events_by_users['rows']:
    events_dict[row['user_id']] = row['count']
for row in users['rows']:
    # attach each user's event count; users with no events get 0
    # (loop body reconstructed -- the original snippet was cut off here)
    row['count'] = events_dict.get(row['id'], 0)
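As the comment above suggests, the dict-based merge can be replaced with Pandas. A minimal sketch with made-up rows (the column names mirror the two queries above; `pandas` is assumed to be available):

```python
import pandas as pd

users = pd.DataFrame([{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}])
events_by_users = pd.DataFrame([{"user_id": 1, "count": 5}])

# left join keeps every user; users without events get NaN, filled with 0
merged = users.merge(events_by_users, left_on="id", right_on="user_id", how="left")
merged["count"] = merged["count"].fillna(0).astype(int)
print(merged[["id", "name", "count"]])
```

The `how="left"` choice matches the loop above: every user appears in the result, whether or not they have events.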