Emily T. Burak (EmilyBurak)

@EmilyBurak
EmilyBurak / rclone-queue-directory.sh
Created March 3, 2026 17:05
A cron job that uses rclone to move files dropped into a queue directory to a remote, removing them from the local system. Good for simple media uploads; includes macOS failure notifications and log rotation.
# Cron to use a directory as a transfer queue to a remote machine
# Every minute, move files older than 30 seconds; display a macOS notification on failure
*/1 * * * * /opt/homebrew/bin/rclone move <QUEUE_DIR> <REMOTE> --min-age 30s --log-file=<LOG_FILE> --log-level INFO || osascript -e 'display notification "rclone upload failed" with title "Upload Error"'
# Rotate the rclone log file 1x/week
0 0 * * 0 mv <LOG_FILE> <LOG_FILE>.old
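For a system without cron or rclone, the same queue logic can be sketched in Python. This is a minimal sketch, not the gist's code: the 30-second minimum age mirrors rclone's `--min-age` (skipping files still being written), and the destination is assumed to be a locally mounted path rather than an rclone remote.

```python
import os
import shutil
import time

def move_settled_files(queue_dir: str, dest_dir: str, min_age_s: float = 30.0) -> list[str]:
    """Move files older than min_age_s from queue_dir to dest_dir; return moved names."""
    moved = []
    now = time.time()
    for name in os.listdir(queue_dir):
        src = os.path.join(queue_dir, name)
        # Skip directories and files younger than min_age_s (possibly still uploading)
        if os.path.isfile(src) and now - os.path.getmtime(src) >= min_age_s:
            shutil.move(src, os.path.join(dest_dir, name))
            moved.append(name)
    return moved
```

Run it from cron (or a loop) the same way the rclone one-liner is scheduled; failure notification and log rotation would still need to be handled around it.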
@EmilyBurak
EmilyBurak / docker-stop-by-name.sh
Last active March 20, 2026 15:39
docker ps -> grep -> awk -> xargs
# Get all Docker containers matching a grep, to feed into `docker stop`
# Note: with the default `docker ps` output, $2 is the IMAGE column (shown below);
# use $NF or `docker ps --format '{{.Names}}'` for the container names
>> docker ps | grep "nextcloud" | awk '{print $2}'
ghcr.io/nextcloud-releases/aio-apache:latest
ghcr.io/nextcloud-releases/aio-nextcloud:latest
ghcr.io/nextcloud-releases/aio-imaginary:latest
ghcr.io/nextcloud-releases/aio-redis:latest
ghcr.io/nextcloud-releases/aio-postgresql:latest
ghcr.io/nextcloud-releases/aio-whiteboard:latest
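To stop by name, the pipeline's filtering step can be sketched in Python as a hypothetical helper that takes the NAMES column (the last field) rather than $2, assuming the default `docker ps` table output:

```python
def matching_containers(ps_output: str, pattern: str) -> list[str]:
    """Mimic `docker ps | grep pattern | awk '{print $NF}'`:
    return the NAMES column (last field) of each matching row."""
    names = []
    for line in ps_output.splitlines()[1:]:  # skip the header row
        if pattern in line:
            names.append(line.split()[-1])  # NAMES is the last whitespace-separated field
    return names
```

The result can then be passed to `docker stop`, e.g. via `subprocess.run(["docker", "stop", *names])`, mirroring the `xargs` step.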
@EmilyBurak
EmilyBurak / sam_example.py
Created March 5, 2025 23:47
AWS SAM Lambda Connecting to host.docker.internal
import json
from typing import Dict, List
from urllib import parse, request
import os
import psycopg
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
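The point of this gist is that a Lambda running under `sam local` reaches services on the host machine via `host.docker.internal`. A minimal sketch of building a psycopg connection string with that default; the environment variable names here are hypothetical, not necessarily the gist's:

```python
import os

def build_dsn() -> str:
    """Build a psycopg DSN, defaulting the host to host.docker.internal
    so a Lambda in SAM's local container can reach Postgres on the host."""
    host = os.environ.get("DB_HOST", "host.docker.internal")
    port = os.environ.get("DB_PORT", "5432")
    name = os.environ.get("DB_NAME", "postgres")
    user = os.environ.get("DB_USER", "postgres")
    return f"host={host} port={port} dbname={name} user={user}"
```

Inside the handler this would be used as `psycopg.connect(build_dsn())`; in a deployed Lambda, `DB_HOST` would be set to the real database endpoint.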
@EmilyBurak
EmilyBurak / stop_aws_backup_jobs.sh
Last active July 4, 2024 14:06
AWS Backup vaults can't be deleted while they have active jobs; here's how to clear out the active jobs in one go while printing the stopped job IDs.
aws backup list-backup-jobs \
--by-state RUNNING \
--query 'BackupJobs[].BackupJobId' \
--output text | xargs -t -n1 aws backup stop-backup-job --backup-job-id && echo "All RUNNING state backup jobs stopped."
aws backup list-backup-jobs \
--by-state CREATED \
--query 'BackupJobs[].BackupJobId' \
--output text | xargs -t -n1 aws backup stop-backup-job --backup-job-id && echo "All CREATED state backup jobs stopped."
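The same cleanup can be sketched with boto3 rather than the CLI. The helper below only extracts job IDs from a `ListBackupJobs` response page, so it can be tested without credentials; the live AWS calls stay in the commented usage sketch:

```python
def job_ids(list_backup_jobs_page: dict) -> list[str]:
    """Pull BackupJobId values from one backup:ListBackupJobs response page."""
    return [job["BackupJobId"] for job in list_backup_jobs_page.get("BackupJobs", [])]

# Usage sketch (requires AWS credentials):
# import boto3
# backup = boto3.client("backup")
# for page in backup.get_paginator("list_backup_jobs").paginate(ByState="RUNNING"):
#     for jid in job_ids(page):
#         backup.stop_backup_job(BackupJobId=jid)
```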
@EmilyBurak
EmilyBurak / optional_s3_transition.tf
Created June 28, 2024 15:11
An example of using dynamic blocks to make S3 storage class transitions optional, inspired by needing to suppress Glacier transitions in favor of simply expiring objects, in child code inheriting from a parent S3 bucket module that enables transitions by default.
# variables.tf
variable "transition_enabled" {
  default     = false
  description = "Enable transition of S3 objects in a given bucket to Glacier storage class"
  type        = bool
}
variable "transition_to_glacier" {
  default     = 30
  description = "The number of days to wait before transitioning an object to Glacier"
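The dynamic block these variables feed might look like the following sketch; the resource names and the `expiration_days` variable are illustrative, not necessarily the gist's:

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    id     = "archive-or-expire"
    status = "Enabled"

    # Emitted only when transition_enabled is true; otherwise objects just expire
    dynamic "transition" {
      for_each = var.transition_enabled ? [var.transition_to_glacier] : []
      content {
        days          = transition.value
        storage_class = "GLACIER"
      }
    }

    expiration {
      days = var.expiration_days
    }
  }
}
```

With `for_each` over an empty list, the `transition` block is simply not rendered, so child modules can pass `transition_enabled = false` instead of forking the parent module.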
@EmilyBurak
EmilyBurak / first_weekday_month.go
Last active June 28, 2024 15:25
Go helpers to get the first weekday of the month, for gating scheduled code.
func isWeekday(date time.Time) bool {
	return date.Weekday() != time.Saturday && date.Weekday() != time.Sunday
}
func getDayOfWeek(date time.Time) int {
	return int(date.Weekday())
}
func getFirstWeekday(date time.Time) time.Time {
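The same first-weekday computation the Go helpers build toward can be sketched in Python (a hypothetical equivalent, not the gist's code):

```python
from datetime import date, timedelta

def first_weekday(year: int, month: int) -> date:
    """Return the first Monday-to-Friday date of the given month."""
    d = date(year, month, 1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d
```

A scheduled job can then run its monthly work only when `date.today() == first_weekday(today.year, today.month)`.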
@EmilyBurak
EmilyBurak / aws-cost-difference.py
Last active June 28, 2024 15:16
Creates a matplotlib chart of AWS cost differences between the current and last month
import boto3
from datetime import datetime, timedelta
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Get the current month and the last month
current_month = datetime.now().strftime("%Y-%m")
last_month = (datetime.now() - timedelta(days=30)).strftime("%Y-%m")
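One caveat: `timedelta(days=30)` only approximates "last month" and misfires early in the month that follows a 31-day month (e.g. March 1 minus 30 days lands in January). An exact version, sketched as a standalone helper:

```python
from datetime import datetime, timedelta

def month_pair(today=None):
    """Return (current_month, last_month) as YYYY-MM strings,
    computing last month exactly rather than as 'now minus 30 days'."""
    now = today or datetime.now()
    # The day before the first of this month is always in last month
    end_of_last_month = now.replace(day=1) - timedelta(days=1)
    return now.strftime("%Y-%m"), end_of_last_month.strftime("%Y-%m")
```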
@EmilyBurak
EmilyBurak / tf-ecs-td-updates.yaml
Last active June 28, 2024 15:18
terraform ECS state hacky stuff
# Problem Statement/Use Case: Terraform struggles to manage ECS Task Definitions (think k8s manifests for N containers
# bundled together) when the image used by the container(s) changes, importantly and especially via CI/CD like Actions.
# Terraform will then always want to update the task definition, which means destroying and recreating it.
# To avoid that disruption and downtime, this workflow snippet does some wacky stuff with Terraform import and moved blocks,
# as well as editing the constituent code, keeping both the repo and the state in sync with the live resource.
# An import block references an existing resource so that Terraform can generate code for it
- name: Create import block
run: echo -e 'import { \n to = aws_ecs_task_definition.foo \n id = "${{steps.get-task-def.outputs.TASK_DEF_ARN}}"\n}' > generate_import.tf
@EmilyBurak
EmilyBurak / aws_30_day_costs_lambda.py
Last active June 28, 2024 15:19
AWS Datadog Last 30 Day Cost Lambda
import boto3
from datetime import datetime, timedelta
from datadog_lambda.metric import lambda_metric
def lambda_handler(event, context):
    # create iam client
    iam = boto3.client("iam")
    # List account aliases through the pagination interface
    paginator = iam.get_paginator("list_account_aliases")
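Flattening the aliases out of that paginator might look like this hypothetical helper, shown against plain dicts so it can run without AWS credentials:

```python
def account_aliases(pages) -> list[str]:
    """Flatten AccountAliases across iam:ListAccountAliases response pages."""
    aliases = []
    for page in pages:
        aliases.extend(page.get("AccountAliases", []))
    return aliases

# Usage sketch with the gist's paginator:
# aliases = account_aliases(paginator.paginate())
```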
@EmilyBurak
EmilyBurak / tf_aws_imports.py
Last active June 28, 2024 15:18
Terraform import block generation for AWS Resources
# A script to, with some user help and knowledge of boto3 (or at least its documentation),
# derive Terraform import blocks from bulk AWS resources. This is very bad and funny to me in retrospect but
# maybe someone will find it useful or a jumping-off point.
# I'd maybe do this in Go with https://github.com/hashicorp/terraform-exec or something if I did it again.
import boto3
from botocore.exceptions import ClientError
import re
import os
import sys
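The core rendering step such a script needs, emitting a Terraform import block per discovered resource, might be sketched as follows; the address and ID in the usage line are illustrative, since real import IDs depend on the resource type:

```python
def render_import_block(tf_address: str, resource_id: str) -> str:
    """Render one Terraform import block for a resource address and its import ID."""
    return f'import {{\n  to = {tf_address}\n  id = "{resource_id}"\n}}\n'

# Usage sketch:
# block = render_import_block("aws_instance.web", "i-0abc123")
# then write the blocks to a .tf file and run `terraform plan -generate-config-out=generated.tf`
```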