1. Python style
Generally follow the PEP 8 Python style guide.
But! Try to use TensorFlow wherever it is useful (or possible...)
2. Tensors
- Operations that deal with batches may assume that the
  first dimension of a Tensor is the batch dimension.
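A minimal, dependency-free sketch of that convention (`mean_per_example` is an illustrative name, not a TensorFlow API):

```python
def mean_per_example(batch):
    """Reduce each example to one number; axis 0 is assumed to be the batch dimension."""
    # batch[i] holds the values of the i-th example
    return [sum(example) / len(example) for example in batch]

batch = [[1.0, 3.0], [2.0, 2.0], [0.0, 4.0]]  # batch of 3 examples
print(mean_per_example(batch))  # [2.0, 2.0, 2.0]
```

Every op that follows this rule can be applied to a single example by wrapping it in a batch of size one.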
```shell
# Install script for Caffe2 and Detectron on AWS EC2
#
# Tested environment:
# - AMI: Deep Learning Base AMI (Ubuntu) Version 6.0 - ami-ce3673b6 (CUDA is already installed)
# - Instance: p3.2xlarge (V100 * 1)
# - Caffe2: https://github.com/pytorch/pytorch/commit/731273b8d61dfa2aa8b2909f27c8810ede103952
# - Detectron: https://github.com/facebookresearch/Detectron/commit/cd447c77c96f5752d6b37761d30bbdacc86989a2
#
# Usage:
#   Launch a fresh EC2 instance, put this script in /home/ubuntu/, and run the following command.
```
```python
import tensorflow as tf
import numpy as np

corpus_raw = 'He is the king . The king is royal . She is the royal queen '

# convert to lower case
corpus_raw = corpus_raw.lower()

# collect the individual words, skipping the '.' sentence separator
words = []
for word in corpus_raw.split():
    if word != '.':
        words.append(word)
```
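Continuing from the tokenization above, the usual next step before training word2vec is to map each unique word to an integer id. A self-contained sketch (the corpus is repeated so it runs on its own; `word2int`/`int2word` are illustrative names):

```python
corpus_raw = 'He is the king . The king is royal . She is the royal queen '.lower()
words = [w for w in corpus_raw.split() if w != '.']

# build the vocabulary: one integer id per unique word
word2int = {}
int2word = {}
for i, word in enumerate(sorted(set(words))):
    word2int[word] = i
    int2word[i] = word

print(len(word2int))  # 7 unique words: he, is, king, queen, royal, she, the
```

These ids are what get turned into one-hot vectors (or embedding lookups) when building the training pairs.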
```shell
# Clone the TensorFlow Serving source
git clone https://github.com/tensorflow/serving
cd serving && git checkout <commit_hash>

# Build the Docker image (time to go get yourself a coffee, maybe a meal as well, this will take a while)
docker build -t some_user_namespace/tensorflow-serving:latest -f ./serving/tensorflow_serving/tools/docker/Dockerfile.devel .

# Run the Docker container in a terminal
docker run -ti some_user_namespace/tensorflow-serving:latest
```
```shell
mkdir -p /work/

# Clone the source from GitHub
cd /work/ && git clone --recurse-submodules https://github.com/tensorflow/serving

# Pin the version of TensorFlow Serving and its submodule
TENSOR_SERVING_COMMIT_HASH=85db9d3
TENSORFLOW_COMMIT_HASH=dbe5e17
cd /work/serving && git checkout $TENSOR_SERVING_COMMIT_HASH
```
```python
# requires tensorflow 0.12
# requires gensim 0.13.3+ for the new API model.wv.index2word (or just use model.index2word)
from gensim.models import Word2Vec
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

# load your gensim model
model = Word2Vec.load("YOUR-MODEL")
```
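After loading the model, the TensorBoard projector also needs a metadata file with one word per line, in the same order as the rows of the embedding matrix. A dependency-free sketch of writing that file (the `index2word` list stands in for `model.wv.index2word` from the loaded gensim model):

```python
import io
import os
import tempfile

# stand-in for model.wv.index2word from the loaded gensim model
index2word = ['the', 'king', 'queen']

out_dir = tempfile.mkdtemp()
meta_path = os.path.join(out_dir, 'metadata.tsv')
with io.open(meta_path, 'w', encoding='utf-8') as f:
    for word in index2word:
        f.write(word + '\n')  # one row per embedding vector, in vocabulary order
```

The path to this file is what gets set on the projector's embedding config so TensorBoard can label each point.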
```python
'''This script goes along the blog post
"Building powerful image classification models using very little data"
from blog.keras.io.
It uses data that can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
'''
```
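The directory layout the docstring describes can be created up front. A small sketch (rooted in a temporary directory here so it runs anywhere; in the real setup you would use a `data/` folder next to the script):

```python
import os
import tempfile

# recreate the layout described above: data/{train,validation}/{cats,dogs}
root = tempfile.mkdtemp()
for split in ('train', 'validation'):
    for label in ('cats', 'dogs'):
        os.makedirs(os.path.join(root, 'data', split, label))

print(sorted(os.listdir(os.path.join(root, 'data', 'train'))))  # ['cats', 'dogs']
```

Keras' `flow_from_directory` infers the class labels from exactly this one-subfolder-per-class structure.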
```javascript
// npm install telegraf telegraf-wit
const Telegraf = require('telegraf')
const TelegrafWit = require('telegraf-wit')

const app = new Telegraf(process.env.BOT_TOKEN)
const wit = new TelegrafWit(process.env.WIT_TOKEN)

app.use(Telegraf.memorySession())
```
```scala
import org.apache.spark.ml.feature.{CountVectorizer, RegexTokenizer, StopWordsRemover}
import org.apache.spark.mllib.clustering.{LDA, OnlineLDAOptimizer}
import org.apache.spark.mllib.linalg.Vector
import sqlContext.implicits._

val numTopics: Int = 100
val maxIterations: Int = 100
val vocabSize: Int = 10000
```