Marcin Antkiewicz marcinantkiewicz

marcinantkiewicz / log_history.sh
Created March 19, 2026 04:21
Looking up better ways to record history in bash, I found https://github.com/barabo/advanced-shell-history and stopped working on this
# Work on this ended before the thing got much use. Mixing the DEBUG trap and PROMPT_COMMAND was too much for my ability to troubleshoot bash
# pain comes from:
# - $? has to be recorded as the first command in PROMPT_COMMAND, else it's overwritten by whatever runs first
# - trap debug runs every time, including every command listed in PROMPT_COMMAND
# - history 1 is a good way to get last command, but a pita in zsh
# - PROMPT_COMMAND probably does not capture subshells or pipe components right
# - sqlite does not like concurrent writes, this very much may cause concurrent writes
# just use https://github.com/barabo/advanced-shell-history
#
COMMAND_START_TIME=0;
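The pain points above can be sketched roughly like this (a minimal sketch of the approach, not the abandoned original; the log filename `~/.bash_command_log` is an assumption):

```shell
# DEBUG trap stamps the start time; PROMPT_COMMAND records $? first thing.
COMMAND_START_TIME=""

pre_command() {
  # the DEBUG trap fires before every command, including PROMPT_COMMAND itself,
  # so only take the first timestamp after each prompt
  if [ -z "$COMMAND_START_TIME" ]; then COMMAND_START_TIME=$SECONDS; fi
}

log_last_command() {
  local status=$?   # must run first, or whatever runs next clobbers it
  local start=${COMMAND_START_TIME:-$SECONDS}
  local last
  last=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')
  printf '%s\t%ss\t%s\n' "$status" "$(( SECONDS - start ))" "$last" \
    >> "$HOME/.bash_command_log"
  COMMAND_START_TIME=""
}

trap pre_command DEBUG
PROMPT_COMMAND=log_last_command
```

This still has all the caveats listed above (subshells, pipe components, concurrent writers); advanced-shell-history handles those properly.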
marcinantkiewicz / !setup.md
Last active March 18, 2026 06:10
Setup to boot RPI into a kiosk mode, serving remote grafana dashboard.

This is a setup to allow a Linux host to boot into kiosk mode and display a dashboard from a Grafana service. It may require some work to run on something other than a Raspberry Pi + LXDE. This is an example; it is not meant to be secure or reliable.

  1. Get the grafana-kiosk binary. There is no deb. I look up the version in releases, copy the checksum, and edit the get_grafana_kiosk.sh script. It will download the binary, verify it, and make it executable.
  2. On my system, desktop config lives in ~/.config/lxsession/LXDE/; find autorun and desktop.conf there. See what's in autorun, you may want to merge it with the one provided here. Make sure desktop.conf has the two settings with the right values.
  3. Edit start_kiosk.sh and put in the credentials and the Grafana URL. Those credentials really want to live in something like Infisical, but that's for later.
  4. I want the screen to be always on until I turn it off. I turn it off via the screen_sleep.sh script.

Note:

  • Mine runs on som
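Step 1 can be sketched along these lines; VERSION, SHA256, and the release asset name are placeholders, copy the real values from the grafana-kiosk releases page:

```shell
# rough sketch of get_grafana_kiosk.sh -- download, verify, make executable.
# VERSION/SHA256/asset name below are placeholders, not real release values.
VERSION="${VERSION:-1.0.9}"
SHA256="${SHA256:-paste-release-checksum-here}"
URL="https://github.com/grafana/grafana-kiosk/releases/download/v${VERSION}/grafana-kiosk.linux.armv7"
DEST="${DEST:-$HOME/bin/grafana-kiosk}"

verify_sha256() {
  # verify_sha256 FILE SUM: exits nonzero unless SUM matches the file
  echo "$2  $1" | sha256sum -c --quiet -
}

if [ "${1:-}" = "install" ]; then   # download only when explicitly asked
  mkdir -p "$(dirname "$DEST")"
  curl -fsSL "$URL" -o "$DEST"
  verify_sha256 "$DEST" "$SHA256"
  chmod +x "$DEST"
fi
```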
marcinantkiewicz / background.js
Created March 10, 2026 19:18
extension to log headers, this extension will probably burn your computer down
const clacksCache = {};
const loggerURL = "https://localhost/crx_header_logger";

chrome.webRequest.onHeadersReceived.addListener(
  (details) => {
    const clacks = details.responseHeaders.find(
      (h) => h.name.toLowerCase() === 'x-clacks-overhead'
    );
    if (clacks && details.tabId > 0) {
marcinantkiewicz / list_images.md
Last active February 25, 2026 05:56
Create GH issue listing images used in dockerfiles in specified or all repositories in a github org.

Note:

  • Default GH token does not allow reads from other repos. I use GH App to auth the action.
  • GH search API has vicious rate limits, 3s sleep is not enough, or I am getting labelled as a bot. WTF Microsoft?
  • This will open one issue, listing all the images, in a table |repo|dockerfile|image|. It should process multi-stage dockerfiles.
  • the way it finds dockerfiles is dumb - find anything with dockerfile in name, find FROM line... works fine on my computer. I
name: List docker images
on:
  schedule:
    - cron: '0 8 * * *' # 8am utc/midnight-late night in the US
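The "dumb" scan described in the notes can be sketched like this (repo name is a placeholder; note it also prints stage names reused in multi-stage `FROM stage` lines):

```shell
# find anything with "dockerfile" in the name, emit the second field of every
# FROM line as a |repo|dockerfile|image| table row (repo is a placeholder)
list_images() {   # list_images DIR REPO
  find "$1" -type f -iname '*dockerfile*' -print0 |
    while IFS= read -r -d '' f; do
      awk -v repo="$2" -v file="$f" \
        'toupper($1) == "FROM" { print "|" repo "|" file "|" $2 "|" }' "$f"
    done
}
```

Usage: `list_images . myorg/myrepo`.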
marcinantkiewicz / github_ssh_to_local_authkeys.md
Created February 21, 2026 21:36
Allow ssh access to user based on their github identity

Allow a user to access something and run a command, authenticated by their GitHub public SSH key.
Note: using command= enables interesting security footguns

COMMAND='command="free",restrict'
GH_USER=username
KEYS=$(curl -s https://github.com/$GH_USER.keys)

{
 echo "# https://github.com/$GH_USER"
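The truncated snippet above can be completed along these lines (a sketch; the helper name is an assumption, and appending straight to `~/.ssh/authorized_keys` is the footgun-adjacent part):

```shell
# prefix every fetched public key with the forced-command options
format_authkeys() {   # format_authkeys GH_USER OPTIONS, public keys on stdin
  echo "# https://github.com/$1"
  while IFS= read -r key; do
    if [ -n "$key" ]; then echo "$2 $key"; fi
  done
}

# usage (hits the network):
#   COMMAND='command="free",restrict'
#   curl -s "https://github.com/username.keys" \
#     | format_authkeys username "$COMMAND" >> ~/.ssh/authorized_keys
```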
---
- name: install llm cli and plugins
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    llm_plugins:
      - llm-openrouter
      - llm-mlx
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: ${namespace}
spec:
  containers:
  - image: google/cloud-sdk:slim
    name: test-pod
    command: ["sleep", "86400"]
# Players
# KSA - k8s service account
# GSA - GCP service account
# metadata server - runs on cluster nodes where pods with Workload Identity are dispatched, will respond to requests directed to 169.254.169.254.
# workload identity - modifies behavior of the metadata server. Transparently to the SA, it will return GCP STS tokens issued to the impersonated GCP role.
# Note: when WI is enabled but not configured properly, the metadata server will fail (silently?) when it does not find the expected annotation, etc.
#
# request flow
# 1. pod requests credentials from the metadata server
# 2. metadata server checks if the pod is using workload identity, and identifies the KSA
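The KSA side of that binding is an annotation on the service account; names below are assumptions, but the annotation key is the one GKE Workload Identity looks for:

```yaml
# assumed names: the KSA must carry this annotation for the metadata server
# to hand out tokens for the impersonated GSA
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-ksa
  namespace: ${namespace}
  annotations:
    iam.gke.io/gcp-service-account: test-gsa@my-project.iam.gserviceaccount.com
```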
# Docker needs the NVIDIA Container Toolkit to make the nvidia drivers available inside containers, and probably more.
# - you will need the nvidia drivers too. https://github.com/NVIDIA/nvidia-container-toolkit
# - the model directory needs some IOPS to load the models; a dedicated NVME is both fast and naturally limits the sprawl
# - in GPU stats you will see both (G)raphics and (C)ompute jobs. LLM-related tooling only controls the C jobs.
# -- once Ollama container is running
#
# this should produce help output
$ docker exec -it ollama ollama