- Single-line comments start with `//`. Multi-line comments start with `/*` and end with `*/`.
- C# uses braces (`{` and `}`) rather than indentation to organize code into blocks. If a block consists of a single statement, the braces can be omitted.
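Both rules can be illustrated in a short sketch (the `Program` class and its output strings are arbitrary, chosen only for illustration):

```csharp
// A single-line comment.
/* A multi-line comment,
   started with slash-star and ended with star-slash. */
using System;

class Program
{
    static void Main()
    {
        int x = 3;
        if (x > 0)
            Console.WriteLine("positive"); // single-statement block: braces omitted
        else
        {
            // multi-statement blocks require braces
            Console.WriteLine("non-positive");
            Console.WriteLine("check the input");
        }
    }
}
```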
```bash
# Install script of Caffe2 and Detectron on AWS EC2
#
# Tested environment:
# - AMI: Deep Learning Base AMI (Ubuntu) Version 3.0 - ami-38c87440 (CUDA is already installed)
# - Instance: p3.2xlarge (V100 * 1)
# - Caffe2: https://github.com/caffe2/caffe2/commit/e1f614a5f8ae92f4ecb828e1d5f84d2cd1fe12bd
# - Detectron: https://github.com/facebookresearch/Detectron/commit/a22302de27f9004422a96414ed4088d05c664978
#
# Usage:
#   Launch a fresh EC2 instance, put this script in /home/ubuntu/, and run the following command.
```
```python
import tensorflow as tf
import numpy as np

corpus_raw = 'He is the king . The king is royal . She is the royal queen '

# convert to lower case
corpus_raw = corpus_raw.lower()

words = []
for word in corpus_raw.split():
    if word != '.':  # skip the standalone '.' tokens so they are not treated as words
        words.append(word)
```
```bash
# Clone the TensorFlow Serving source
git clone https://github.com/tensorflow/serving
cd serving && git checkout <commit_hash>

# Build the Docker image (time to go get yourself a coffee, maybe a meal as well, this will take a while)
docker build -t some_user_namespace/tensorflow-serving:latest -f ./serving/tensorflow_serving/tools/docker/Dockerfile.devel .

# Run the Docker container with an interactive terminal
docker run -ti some_user_namespace/tensorflow-serving:latest
```
```bash
mkdir -p /work/

# Clone the source from GitHub
cd /work/ && git clone --recurse-submodules https://github.com/tensorflow/serving

# Pin the versions of TensorFlow Serving and its submodule
TENSOR_SERVING_COMMIT_HASH=85db9d3
TENSORFLOW_COMMIT_HASH=dbe5e17
cd /work/serving && git checkout $TENSOR_SERVING_COMMIT_HASH
```
```javascript
// npm install telegraf telegraf-wit
var Telegraf = require('telegraf')
var TelegrafWit = require('telegraf-wit')

var app = new Telegraf(process.env.BOT_TOKEN)
var wit = new TelegrafWit(process.env.WIT_TOKEN)

app.use(Telegraf.memorySession())
```
```bash
# first:
lsbom -f -l -s -pf /var/db/receipts/org.nodejs.pkg.bom | while read f; do sudo rm /usr/local/${f}; done
sudo rm -rf /usr/local/lib/node /usr/local/lib/node_modules /var/db/receipts/org.nodejs.*

# To recap, the best way (I've found) to completely uninstall node + npm is to do the following:
# go to /usr/local/lib and delete any node and node_modules
cd /usr/local/lib
sudo rm -rf node*
```
```scala
/*
 * This example uses Scala. Please see the MLlib documentation for a Java example.
 * Try running this code in the Spark shell. It may produce different topics each time
 * (since LDA includes some randomization), but it should give topics similar to those
 * listed above.
 *
 * This example is paired with a blog post on LDA in Spark: http://databricks.com/blog
 * Spark: http://spark.apache.org/
 */
import scala.collection.mutable
```
```python
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext, SQLContext
import pandas as pd

# sc: Spark context
# file_name: csv file name
# table_name: output table name
# sep: csv file separator
# infer_limit: number of rows pandas uses for type inference
def read_csv(sc, file_name, table_name, sep=",", infer_limit=10000):
    # minimal sketch of the body: infer column types with pandas on the
    # first infer_limit rows, then register the result as a Spark SQL table
    sql_context = SQLContext(sc)
    pandas_df = pd.read_csv(file_name, sep=sep, nrows=infer_limit)
    df = sql_context.createDataFrame(pandas_df)
    df.registerTempTable(table_name)
    return df
```
```python
#!/usr/bin/env python
# encoding: utf-8
"""
linkedin-query.py

Created by Thomas Cabrol on 2012-12-03.
Customised by Rik Van Bruggen
Copyright (c) 2012 dataiku. All rights reserved.

Building the LinkedIn Graph
```