Kanchan Tewary kanchantewary

  • IBM
https://medium.com/@ssola/building-microservices-with-python-part-i-5240a8dcc2fb
https://medium.com/@ryangordon210/building-python-microservices-part-i-getting-started-792fa615608
#check that the docker installation is working
docker run hello-world
docker info
docker search ubuntu
docker pull ubuntu
docker images
docker run
docker ps
docker start
docker stop
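Chained together, those commands form a typical container lifecycle. A minimal sketch, wrapped in a function so nothing runs unless it is invoked on a machine with Docker installed (the `ubuntu` image and the `sleep 60` command are arbitrary examples):

```shell
# Sketch of a container lifecycle (assumes Docker is installed and the daemon is running).
# Defined as a function so the commands are not executed on load.
docker_lifecycle() {
  docker pull ubuntu                    # fetch the image from Docker Hub
  id=$(docker run -d ubuntu sleep 60)   # run detached, capture the container id
  docker ps --filter "id=$id"           # confirm it is running
  docker stop "$id"                     # stop the container
  docker start "$id"                    # start the same container again
}
```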
#download hyperledger fabric samples and binaries (version 1.2.1)
curl -sSL http://bit.ly/2ysbOFE | bash -s 1.2.1
sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "/home/$USER/.docker" -R
-------------------
user@user-VirtualBox:~$ sudo curl -sSL http://bit.ly/2ysbOFE | bash -s 1.2.1
Installing hyperledger/fabric-samples repo
kanchantewary / common-linux-commands.txt
Last active April 8, 2019 04:57
common linux command reference
#to check memory usage
free -m
cat /proc/meminfo
vmstat -s
top
htop [needs to be installed separately]
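For a scripted check, the same numbers can be pulled out of /proc/meminfo directly. A small sketch (the MemTotal/MemAvailable fields are reported in kB on Linux):

```shell
# Total and available memory in MB, parsed from /proc/meminfo (values are in kB)
total_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
echo "total: $((total_kb / 1024)) MB, available: $((avail_kb / 1024)) MB"
```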
#to create an alias for a command, add the following line to a ~/.bash_aliases file, then source the bashrc file
alias cd-spark='cd /home/user/workarea/projects/learn-pyspark/jobs'
cd-spark
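Spelled out, the steps are: append the alias line to ~/.bash_aliases (which ~/.bashrc sources on most distros) and reload it. A sketch using the example path above:

```shell
# Append the alias and reload it; ~/.bash_aliases is sourced by ~/.bashrc on most distros
echo "alias cd-spark='cd /home/user/workarea/projects/learn-pyspark/jobs'" >> ~/.bash_aliases
source ~/.bash_aliases
alias cd-spark   # print the definition to confirm it is set
```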
kanchantewary / gist:c74232d9e3c9fb1639423bb4e93be4cf
Last active March 13, 2019 09:43
spark certification notes
https://spark-packages.org
spark design philosophy – unified platform for all kinds of data analysis tasks
APIs – dataframe, datasets, SQL
essentially a compute engine
libraries – SQL, MLlib, stream processing, structured streaming, GraphX for graph processing
3 ways to launch a shell – spark-shell, spark-sql, pyspark
modes – cluster – standalone cluster manager, mesos, yarn
local – driver and executor processes run on the same machine
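All three shells take the same --master flag; in local mode, local[N] runs the driver and N executor threads in a single process. A sketch that just prints the invocations rather than launching them (so it works without Spark installed):

```shell
# The three interactive shells, each launched in local mode with 2 worker threads.
# Printed rather than executed; run any line directly on a machine with Spark on the PATH.
cmds=$(for sh in spark-shell spark-sql pyspark; do echo "$sh --master local[2]"; done)
echo "$cmds"
```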
curl -sL https://ibm.biz/idt-installer | bash
export KUBECONFIG=/home/user/.bluemix/plugins/container-service/clusters/test-cluster-1/kube-config-mel01-test-cluster-1.yml
#list the clusters in the account
ibmcloud ks clusters
#list my worker nodes
kubectl get nodes
git status
git add -A
git commit -m 'message goes here'
git push origin master
#install docker ce in ubuntu (assumes Docker's apt repository has already been added)
sudo apt-get update
sudo apt-get install docker-ce
#run a docker container
sudo docker run ubuntu /bin/echo 'Hello world'
docker run --name daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
#see the logs
sudo docker logs -f daemon
#start the container(daemon)
sudo docker start daemon
# from pyspark import SparkContext, SparkConf  # older RDD-era entry points; SparkSession supersedes them
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType  # explicit imports to define a schema