Tsun-Yi Yang shamangary

@VikingPenguinYT
VikingPenguinYT / dropout_bayesian_approximation_tensorflow.py
Last active June 24, 2025 00:36
Implementing Dropout as a Bayesian Approximation in TensorFlow
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.contrib.distributions import Bernoulli  # TensorFlow 1.x API

class VariationalDense:
    """Variational Dense Layer Class"""
    def __init__(self, n_in, n_out, model_prob, model_lam):
        self.model_prob = model_prob  # dropout keep probability
        # (snippet truncated in the gist preview)
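The preview cuts the gist off, but the technique it implements (Monte-Carlo dropout as an approximate Bayesian posterior, after Gal and Ghahramani) can be sketched with NumPy alone: keep the dropout mask active at prediction time and average several stochastic forward passes to get a predictive mean and an uncertainty estimate. Names here (`mc_dropout_predict`, `n_samples`) are illustrative, not from the gist.

```python
import numpy as np

def mc_dropout_predict(x, W, b, keep_prob=0.9, n_samples=100, rng=None):
    """Monte-Carlo dropout: sample Bernoulli masks at test time and
    average the stochastic outputs to estimate mean and variance."""
    rng = rng or np.random.default_rng(0)
    outputs = []
    for _ in range(n_samples):
        mask = rng.binomial(1, keep_prob, size=W.shape[0])  # Bernoulli mask on inputs
        outputs.append((x * mask) @ W / keep_prob + b)      # inverted-dropout scaling
    outputs = np.stack(outputs)
    return outputs.mean(axis=0), outputs.var(axis=0)

x = np.ones(4)
W = np.full((4, 2), 0.5)
b = np.zeros(2)
mean, var = mc_dropout_predict(x, W, b)
```

The spread of the sampled outputs (`var`) is what the Bayesian reading of dropout buys you over a single deterministic pass.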
@farrajota
farrajota / freeze.lua
Created September 6, 2016 14:17
freeze parameters of a layer
model.modules[1].parameters = function() return nil end -- freezes the layer when using optim
model.modules[1].accGradParameters = function() end -- overwrite this to reduce computations
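The two overrides work because `optim` only updates what `parameters()` returns, and `accGradParameters` is where a layer accumulates its weight gradients. A pure-Python sketch of the same monkey-patching pattern (the `Layer` class and method names are made up for illustration, not a real Torch API):

```python
class Layer:
    def __init__(self, w):
        self.w = w
        self.grad_w = 0.0

    def parameters(self):
        # The optimizer only updates what this returns.
        return [(self.w, self.grad_w)]

    def acc_grad_parameters(self, grad_out):
        # Accumulates the weight gradient during backprop.
        self.grad_w += grad_out

frozen = Layer(1.5)
# The same two overrides as in freeze.lua:
frozen.parameters = lambda: None             # optimizer no longer sees the layer
frozen.acc_grad_parameters = lambda g: None  # gradient accumulation is skipped
```

Overriding `acc_grad_parameters` is optional for correctness but saves the wasted gradient computation.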
@soumith
soumith / multiple_learning_rates.lua
Created May 26, 2016 21:35 — forked from farrajota/multiple_learning_rates.lua
Example code for setting different learning rates per layer. Note that when calling :parameters(), the weights and bias of a given layer come out as separate, consecutive tensors, so a network with N layers yields a table of N*2 tensors in which each consecutive pair (weight, then bias) belongs to the same layer.
-- multiple learning rates per network. Optimizes two copies of a model network and checks if the optimization steps (2) and (3) produce the same weights/parameters.
require 'torch'
require 'nn'
require 'optim'
torch.setdefaulttensortype('torch.FloatTensor')
-- (1) Define a model for this example.
local model = nn.Sequential()
model:add(nn.Linear(10,20))
@farrajota
farrajota / multiple_learning_rates.lua
Last active April 10, 2018 16:47
Example code for setting different learning rates per layer. Note that when calling :parameters(), the weights and bias of a given layer come out as separate, consecutive tensors, so a network with N layers yields a table of N*2 tensors in which each consecutive pair (weight, then bias) belongs to the same layer.
-- multiple learning rates per network. Optimizes two copies of a model network and checks if the optimization steps (2) and (3) produce the same weights/parameters.
require 'torch'
require 'nn'
require 'optim'
torch.setdefaulttensortype('torch.FloatTensor')
-- (1) Define a model for this example.
local model = nn.Sequential()
model:add(nn.Linear(10,20))
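The flat "N*2 tensors, weight and bias interleaved" structure the description mentions is what makes per-layer rates easy: expand one rate per layer into one rate per tensor by repeating each rate twice. A NumPy sketch of that step (array shapes and rate values are illustrative):

```python
import numpy as np

# :parameters() on an N-layer net yields 2N tensors: [W1, b1, W2, b2, ...].
params = [np.ones((10, 20)), np.zeros(20),   # layer 1: weight, bias
          np.ones((20, 5)),  np.zeros(5)]    # layer 2: weight, bias
grads  = [np.ones_like(p) for p in params]

# One learning rate per layer, repeated so weight and bias of the
# same layer share a rate.
layer_lrs = [0.1, 0.01]
tensor_lrs = [lr for lr in layer_lrs for _ in range(2)]

# Plain SGD step with a per-tensor learning rate.
for p, g, lr in zip(params, grads, tensor_lrs):
    p -= lr * g
```

This mirrors what the Lua gist does when it walks the table returned by :parameters() with a matching table of rates.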
@vadimkantorov
vadimkantorov / loadmatconvnet.lua
Last active October 29, 2017 20:47
A routine to convert some MatConvNet layers to Torch
-- example for an AlexNet-like model
--model, unconverted = loadmatconvnet('/path/to/somemodel.mat', {
-- conv2 = {groups = 2},
-- conv4 = {groups = 2},
-- conv5 = {groups = 2},
-- fc6 = {fc_kH = 6, fc_kW = 6, type = 'nn.Linear'}, -- NOTE: be careful; these fc_kH, fc_kW assume the weights were saved as 1x36 (or 36x1, unverified)
-- fc7 = {type = 'nn.Linear'},
--})
matio = require 'matio'
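Beyond reading the .mat file (which `matio` handles above), the core of such a converter is reordering filter axes: MatConvNet stores convolution filters as H x W x C_in x C_out (MATLAB layout), while Torch's nn.SpatialConvolution expects C_out x C_in x H x W. A NumPy sketch of that transpose, assuming those standard layouts (the function name and shapes are illustrative):

```python
import numpy as np

def matconvnet_to_torch(filters):
    """Reorder H x W x C_in x C_out filters (MatConvNet) into
    C_out x C_in x H x W (Torch), with contiguous memory."""
    return np.ascontiguousarray(filters.transpose(3, 2, 0, 1))

f_mat = np.zeros((11, 11, 3, 96))   # e.g. AlexNet-sized conv1 filters
f_torch = matconvnet_to_torch(f_mat)
```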