session-to-r2: hooks that automatically back up Claude Code session JSONL transcripts to Cloudflare R2
#!/bin/bash
# Install the optimized PreCompact/SessionEnd/SessionStart hooks into ~/.claude/
# Replaces the individual hooks with a pipeline approach to guarantee sequential execution.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SETTINGS="$HOME/.claude/settings.json"
# 1. Copy hook scripts
mkdir -p "$HOME/.claude/hooks"
cp "$SCRIPT_DIR/session-to-r2.sh" "$HOME/.claude/hooks/session-to-r2.sh"
cp "$SCRIPT_DIR/precompact-pipeline.sh" "$HOME/.claude/hooks/precompact-pipeline.sh"
cp "$SCRIPT_DIR/session-start-tasks.sh" "$HOME/.claude/hooks/session-start-tasks.sh"
chmod +x "$HOME/.claude/hooks/session-to-r2.sh"
chmod +x "$HOME/.claude/hooks/precompact-pipeline.sh"
chmod +x "$HOME/.claude/hooks/session-start-tasks.sh"
# Also install redact-session.sh from sops-secrets
REDACT_SRC="$SCRIPT_DIR/../../sops-secrets/scripts/redact-session.sh"
if [ -f "$REDACT_SRC" ]; then
cp "$REDACT_SRC" "$HOME/.claude/hooks/redact-session.sh"
chmod +x "$HOME/.claude/hooks/redact-session.sh"
echo "Installed redact-session.sh"
fi
# Also install shared r2-s3v4.sh (required by session-to-r2.sh and session-start-tasks.sh)
R2_SCRIPT_SRC="$SCRIPT_DIR/../../scripts/r2-s3v4.sh"
R2_SCRIPT_DEST="$HOME/.agents/scripts/r2-s3v4.sh"
if [ -f "$R2_SCRIPT_SRC" ]; then
mkdir -p "$HOME/.agents/scripts"
cp "$R2_SCRIPT_SRC" "$R2_SCRIPT_DEST"
chmod +x "$R2_SCRIPT_DEST"
echo "Installed r2-s3v4.sh → ~/.agents/scripts/"
fi
echo "Installed hook scripts → ~/.claude/hooks/"
# 2. Create required directories
mkdir -p "$HOME/.claude/session-r2-offsets"
mkdir -p "$HOME/.claude/session-r2-pending"
mkdir -p "$HOME/.claude/tmp"
# 3. Ensure settings.json exists
if [ ! -f "$SETTINGS" ]; then
echo '{}' > "$SETTINGS"
fi
# 4. Update hooks configuration
# Remove old individual hooks (handover, session-to-r2, redact) and replace with pipeline
python3 - "$SETTINGS" <<'PYUPDATE'
import json, sys
settings_path = sys.argv[1]
s = json.load(open(settings_path))
hooks = s.setdefault('hooks', {})

# --- Remove old hooks that reference handover, session-to-r2, or redact individually ---
for event in ['PreCompact', 'SessionEnd', 'SessionStart']:
    if event in hooks:
        new_groups = []
        for group in hooks[event]:
            new_hooks = []
            for h in group.get('hooks', []):
                cmd = h.get('command', '')
                # Keep hooks that are NOT part of the old pipeline
                if not any(k in cmd for k in ['pre-compact-handover', 'session-to-r2', 'redact-transcripts', 'redact-session', 'precompact-pipeline', 'session-start-tasks']):
                    new_hooks.append(h)
            if new_hooks:
                group['hooks'] = new_hooks
                new_groups.append(group)
        hooks[event] = new_groups

# --- Add new optimized hooks ---
# PreCompact: single pipeline script (redact → async upload)
hooks.setdefault('PreCompact', []).append({
    'matcher': 'auto',
    'hooks': [{
        'type': 'command',
        'command': 'bash ~/.claude/hooks/precompact-pipeline.sh'
    }]
})

# SessionEnd: same pipeline in sync mode
hooks.setdefault('SessionEnd', []).append({
    'matcher': '*',
    'hooks': [{
        'type': 'command',
        'command': 'bash ~/.claude/hooks/precompact-pipeline.sh --sync'
    }]
})

# SessionStart: background maintenance (startup and resume only, not compact)
for matcher in ['startup', 'resume']:
    hooks.setdefault('SessionStart', []).append({
        'matcher': matcher,
        'hooks': [{
            'type': 'command',
            'command': 'bash ~/.claude/hooks/session-start-tasks.sh'
        }]
    })

json.dump(s, open(settings_path, 'w'), indent=2, ensure_ascii=False)
print(f'Updated hooks in {settings_path}')
PYUPDATE
echo ""
echo "Done! Optimized hook pipeline installed."
echo ""
echo "Hook configuration:"
echo " PreCompact: redact-session → session-to-r2 --async (pipeline)"
echo " SessionEnd: redact-session → session-to-r2 --sync (pipeline)"
echo " SessionStart: background tasks (pending retry, cleanup, full redact)"
echo ""
echo "Required environment variables:"
echo " SESSION_R2_BUCKET=<your-bucket-name> (default: claude-sessions)"
echo " R2_S3_ACCESS_KEY_ID, R2_S3_SECRET_ACCESS_KEY, R2_S3_ENDPOINT"
echo ""
echo "Removed: GitHub Gist handover hooks (replaced by R2)"
#!/bin/bash
# PreCompact/SessionEnd pipeline: redact secrets then upload to R2.
# Guarantees sequential execution: redact completes before upload starts.
# Usage:
# PreCompact: bash precompact-pipeline.sh (async upload)
# SessionEnd: bash precompact-pipeline.sh --sync (sync upload)
set -euo pipefail
export HOME="${HOME:-$(eval echo ~$(whoami))}"
# Read stdin once and replay to each stage
INPUT=$(cat)
# Stage 1: Redact secrets in current transcript
echo "$INPUT" | SOPS_AGE_KEY_FILE="${SOPS_AGE_KEY_FILE:-$HOME/.config/sops/age/keys.txt}" \
bash ~/.claude/hooks/redact-session.sh 2>&1 || true
# Stage 2: Upload to R2
if [[ "${1:-}" == "--sync" ]]; then
echo "$INPUT" | bash ~/.claude/hooks/session-to-r2.sh
else
echo "$INPUT" | bash ~/.claude/hooks/session-to-r2.sh --async
fi
#!/bin/bash
# SessionStart hook: background maintenance tasks.
# Runs on startup/resume only (not compact) to avoid re-triggering after compaction.
# All work is done in a background subshell to avoid blocking session start.
set -euo pipefail
export HOME="${HOME:-$(eval echo ~$(whoami))}"
# [rb-005] Backgrounding is handled inside the script, not via trailing & in settings.json.
_do_tasks() {
  # --- Task 1: Retry pending R2 uploads ---
  PENDING_DIR="$HOME/.claude/session-r2-pending"
  if [ -d "$PENDING_DIR" ]; then
    for manifest in "$PENDING_DIR"/*.manifest.json; do
      [ -f "$manifest" ] || continue
      # Parse manifest
      eval "$(python3 -c "
import sys, json
m = json.load(open('$manifest'))
print('R2_KEY=%s' % m.get('r2_key', ''))
print('DELTA_PATH=%s' % m.get('delta_path', ''))
print('BUCKET=%s' % m.get('bucket', ''))
" 2>/dev/null || echo "R2_KEY= DELTA_PATH= BUCKET=")"
      { [ -z "$R2_KEY" ] || [ -z "$DELTA_PATH" ] || [ -z "$BUCKET" ]; } && continue
      [ -f "$DELTA_PATH" ] || { rm -f "$manifest"; continue; }
      # Load credentials if needed
      if [ -z "${R2_S3_ACCESS_KEY_ID:-}" ]; then
        for _sops_candidate in \
          "$HOME/.agents/skills/secrets/agents.enc.env" \
        ; do
          [ -f "$_sops_candidate" ] || continue
          if command -v sops &>/dev/null; then
            export SOPS_AGE_KEY_FILE="${SOPS_AGE_KEY_FILE:-$HOME/.config/sops/age/keys.txt}"
            while IFS='=' read -r _k _v; do
              [ -z "$_k" ] && continue
              export "$_k=$_v"
            done < <(sops decrypt --input-type dotenv --output-type dotenv "$_sops_candidate" 2>/dev/null)
            unset _k _v
          fi
          break
        done
        unset _sops_candidate
      fi
      # Retry upload via r2-s3v4.sh (curl S3v4)
      R2_SCRIPT="$HOME/.agents/scripts/r2-s3v4.sh"
      [ -x "$R2_SCRIPT" ] || continue
      [ -n "${R2_S3_ACCESS_KEY_ID:-}" ] || continue
      if "$R2_SCRIPT" put "$BUCKET" "$R2_KEY" "$DELTA_PATH" >/dev/null 2>&1; then
        rm -f "$DELTA_PATH" "$manifest"
        echo "Retried pending upload: $R2_KEY" >&2
      fi
    done
  fi

  # --- Task 2: Clean stale offset files (>30 days) ---
  OFFSET_DIR="$HOME/.claude/session-r2-offsets"
  if [ -d "$OFFSET_DIR" ]; then
    find "$OFFSET_DIR" -name '*.offset' -mtime +30 -delete 2>/dev/null || true
  fi

  # --- Task 3: Full-project redact safety net ---
  REDACT_SCRIPT="$HOME/.claude/skills/sops-secrets/scripts/redact-transcripts.sh"
  if [ -x "$REDACT_SCRIPT" ] || [ -f "$REDACT_SCRIPT" ]; then
    SOPS_AGE_KEY_FILE="${SOPS_AGE_KEY_FILE:-$HOME/.config/sops/age/keys.txt}" \
      bash "$REDACT_SCRIPT" >/dev/null 2>&1 || true
  fi
}
# Run everything in background, fully detached
nohup bash -c "$(declare -f _do_tasks); _do_tasks" >/dev/null 2>&1 &
disown
#!/usr/bin/env bash
# PreCompact/SessionEnd hook: Upload session JSONL delta to Cloudflare R2.
#
# Reads hook input JSON from stdin:
# {"transcript_path": "...", "cwd": "..."}
#
# Flags:
# --async Fork upload to background (PreCompact). Parent exits immediately.
# Delta is saved to pending dir; manifest enables retry on failure.
# (default) Synchronous upload (SessionEnd). Also retries any pending uploads.
#
# Environment variables (pre-loaded via sops-secrets shell integration or set manually):
# SESSION_R2_BUCKET (default: claude-sessions) R2 bucket name
# R2_S3_ACCESS_KEY_ID (required) R2 S3-compatible Access Key ID
# R2_S3_SECRET_ACCESS_KEY (required) R2 S3-compatible Secret Access Key
# R2_S3_ENDPOINT (required) R2 S3 endpoint URL
set -euo pipefail
ASYNC_MODE=false
if [[ "${1:-}" == "--async" ]]; then
ASYNC_MODE=true
fi
# --- Ensure HOME is set (hook env may not have it) ---
export HOME="${HOME:-$(eval echo ~$(whoami))}"
# --- Load minimal env (PATH for node/npx, env vars) ---
if [ -x "$HOME/.local/bin/mise" ]; then
eval "$("$HOME/.local/bin/mise" env -s bash 2>/dev/null)" || true
fi
# --- Load secrets from sops if not already in env ---
SCRIPT_REAL="$(readlink -f "$0" 2>/dev/null || python3 -c "import os; print(os.path.realpath('$0'))")"
SCRIPT_DIR="$(cd "$(dirname "$SCRIPT_REAL")" && pwd)"
# Resolve sops secrets: repo checkout first, then skill-relative fallback
SOPS_SECRETS=""
for _candidate in \
"${SCRIPT_DIR}/../../secrets/agents.enc.env" \
"$HOME/.agents/skills/secrets/agents.enc.env" \
; do
[ -f "$_candidate" ] && SOPS_SECRETS="$_candidate" && break
done
unset _candidate
if [ -z "${R2_S3_ACCESS_KEY_ID:-}" ] && [ -n "$SOPS_SECRETS" ] && command -v sops &>/dev/null; then
export SOPS_AGE_KEY_FILE="${SOPS_AGE_KEY_FILE:-$HOME/.config/sops/age/keys.txt}"
while IFS='=' read -r _k _v; do
[ -z "$_k" ] && continue
export "$_k=$_v"
done < <(sops decrypt --input-type dotenv --output-type dotenv "$SOPS_SECRETS" 2>/dev/null)
unset _k _v
fi
if [ -z "${R2_S3_ACCESS_KEY_ID:-}" ] || [ -z "${R2_S3_SECRET_ACCESS_KEY:-}" ] || [ -z "${R2_S3_ENDPOINT:-}" ]; then
echo "R2 S3 credentials not available (env or sops), skipping" >&2
exit 0
fi
# --- R2 upload helper (delegates to shared r2-s3v4.sh) ---
R2_SCRIPT="$(dirname "$(readlink -f "$0" 2>/dev/null || python3 -c "import os; print(os.path.realpath('$0'))")")/../../scripts/r2-s3v4.sh"
if [ ! -x "$R2_SCRIPT" ]; then
R2_SCRIPT="$HOME/.agents/scripts/r2-s3v4.sh"
fi
[ -x "$R2_SCRIPT" ] || { echo "r2-s3v4.sh not found" >&2; exit 1; }
_s3_put() {
"$R2_SCRIPT" put "$1" "$2" "$3"
}
# --- Parse hook input ---
INPUT=$(cat)
read -r TRANSCRIPT_PATH CWD <<< "$(echo "$INPUT" | python3 -c "
import sys, json
d = json.loads(sys.stdin.read())
print(d.get('transcript_path', ''), d.get('cwd', '.'))
" 2>/dev/null || echo "")"
CWD="${CWD:-.}"
if [ -z "$TRANSCRIPT_PATH" ] || [ ! -f "$TRANSCRIPT_PATH" ]; then
echo "No transcript found, skipping" >&2
exit 0
fi
# Default bucket; override via SESSION_R2_BUCKET
export SESSION_R2_BUCKET="${SESSION_R2_BUCKET:-claude-sessions}"
# --- Derive session ID ---
SESSION_FILE=$(basename "$TRANSCRIPT_PATH" .jsonl)
SESSION_ID="${SESSION_FILE:0:12}"
# --- Derive repo info ---
REPO_PREFIX="_noproject"
if git -C "$CWD" rev-parse --is-inside-work-tree &>/dev/null 2>&1; then
REMOTE=$(git -C "$CWD" remote get-url origin 2>/dev/null || true)
if [ -n "$REMOTE" ]; then
REPO_PREFIX=$(echo "$REMOTE" | sed -E 's#.*[:/]([^/]+)/([^/.]+)(\.git)?$#\1/\2#')
fi
fi
# --- Directories ---
OFFSET_DIR="$HOME/.claude/session-r2-offsets"
PENDING_DIR="$HOME/.claude/session-r2-pending"
TMPDIR_SAFE="$HOME/.claude/tmp"
mkdir -p "$OFFSET_DIR" "$PENDING_DIR" "$TMPDIR_SAFE"
OFFSET_FILE="$OFFSET_DIR/${SESSION_FILE}.offset"
# --- [rb-001] Lock offset file to prevent race conditions ---
# Use mkdir-based lock (portable: works on macOS and Linux without flock).
# Store owning PID inside the lock dir for liveness checks.
LOCK_DIR="$OFFSET_FILE.lk"
_acquire_lock() {
  if mkdir "$LOCK_DIR" 2>/dev/null; then
    echo $$ > "$LOCK_DIR/pid"
    return 0
  fi
  return 1
}
_release_lock() {
  # Only release if we still own it (guard against stale-recovery stealing)
  if [ -f "$LOCK_DIR/pid" ] && [ "$(cat "$LOCK_DIR/pid" 2>/dev/null)" = "$$" ]; then
    rm -f "$LOCK_DIR/pid"
    rmdir "$LOCK_DIR" 2>/dev/null || true
  fi
}
if ! _acquire_lock; then
  if [ -d "$LOCK_DIR" ]; then
    # Check if owning process is still alive
    OWNER_PID=$(cat "$LOCK_DIR/pid" 2>/dev/null || echo "")
    if [ -n "$OWNER_PID" ] && kill -0 "$OWNER_PID" 2>/dev/null; then
      echo "Another upload in progress (pid=$OWNER_PID), skipping" >&2
      exit 0
    fi
    # Owner is dead — check age as additional safety
    LOCK_AGE=$(( $(date +%s) - $(python3 -c "import os; print(int(os.stat('$LOCK_DIR').st_mtime))" 2>/dev/null || echo 0) ))
    if [ "$LOCK_AGE" -gt 60 ]; then
      echo "Removing stale lock (pid=$OWNER_PID, age=${LOCK_AGE}s)" >&2
      # Atomically claim the stale lock via mv (rb-007: prevents TOCTOU double-acquisition)
      STALE_DIR="$LOCK_DIR.stale.$$"
      if mv "$LOCK_DIR" "$STALE_DIR" 2>/dev/null; then
        rm -rf "$STALE_DIR"
        if ! _acquire_lock; then
          echo "Another upload in progress, skipping" >&2
          exit 0
        fi
      else
        echo "Another process already claimed stale lock, skipping" >&2
        exit 0
      fi
    else
      echo "Lock held by recently exited process, skipping" >&2
      exit 0
    fi
  else
    # mkdir failed but LOCK_DIR is not a directory (e.g., permission error, ENOSPC, regular file)
    echo "Lock acquisition failed (not a directory collision), skipping" >&2
    exit 0
  fi
fi
# Default EXIT trap: release lock on any exit path.
# The sync path overrides this to also clean up TMPFILE.
trap '_release_lock' EXIT
PREV_OFFSET=0
if [ -f "$OFFSET_FILE" ]; then
PREV_OFFSET=$(cat "$OFFSET_FILE")
fi
CURRENT_SIZE=$(wc -c < "$TRANSCRIPT_PATH" | tr -d ' ')
if [ "$CURRENT_SIZE" -le "$PREV_OFFSET" ]; then
echo "No new data (offset=$PREV_OFFSET, size=$CURRENT_SIZE), skipping" >&2
exit 0
fi
DELTA_SIZE=$((CURRENT_SIZE - PREV_OFFSET))
_log() { echo "$*" >&2; echo "$*" > /dev/tty 2>/dev/null || true; }
_log "🚀 Uploading delta: offset=$PREV_OFFSET size=$DELTA_SIZE"
# --- Extract delta to temp file ---
TMPFILE=$(mktemp "$TMPDIR_SAFE/session-to-r2.XXXXXX")
mv "$TMPFILE" "${TMPFILE}.jsonl"
TMPFILE="${TMPFILE}.jsonl"
tail -c +"$((PREV_OFFSET + 1))" "$TRANSCRIPT_PATH" | head -c "$DELTA_SIZE" > "$TMPFILE"
# --- Build R2 object key ---
TIMESTAMP=$(date +"%Y%m%d_%H-%M-%S")
R2_KEY="${REPO_PREFIX}/${SESSION_ID}/${TIMESTAMP}_${PREV_OFFSET}.jsonl"
if $ASYNC_MODE; then
  # --- [rb-001] Update offset BEFORE forking (reserve the range) ---
  echo "$CURRENT_SIZE" > "$OFFSET_FILE"
  # --- [rb-002] Move delta to pending dir (avoid parent trap deleting it) ---
  DELTA_FILE="$PENDING_DIR/${SESSION_FILE}_${PREV_OFFSET}.jsonl"
  mv "$TMPFILE" "$DELTA_FILE"
  # Write manifest for retry
  cat > "$PENDING_DIR/${SESSION_FILE}_${PREV_OFFSET}.manifest.json" << MANIFEST
{"r2_key":"$R2_KEY","delta_path":"$DELTA_FILE","bucket":"$SESSION_R2_BUCKET","prev_offset":$PREV_OFFSET,"current_size":$CURRENT_SIZE,"offset_file":"$OFFSET_FILE"}
MANIFEST
  # --- [rb-003] Fork upload with full detachment ---
  # Export S3 creds for the background subshell, call shared r2-s3v4.sh directly
  nohup bash -c "
    export R2_S3_ACCESS_KEY_ID='${R2_S3_ACCESS_KEY_ID}'
    export R2_S3_SECRET_ACCESS_KEY='${R2_S3_SECRET_ACCESS_KEY}'
    export R2_S3_ENDPOINT='${R2_S3_ENDPOINT}'
    if '${R2_SCRIPT}' put '${SESSION_R2_BUCKET}' '${R2_KEY}' '$DELTA_FILE'; then
      rm -f '$DELTA_FILE' '$PENDING_DIR/${SESSION_FILE}_${PREV_OFFSET}.manifest.json'
    fi
  " >/dev/null 2>&1 &
  disown
  _log "🚀 Async upload queued: r2://${SESSION_R2_BUCKET}/${R2_KEY} (${DELTA_SIZE} bytes)"
else
  # --- Synchronous upload ---
  # Combine tmpfile cleanup with lock release (rb-001: bash trap replaces, not stacks)
  trap 'rm -f "$TMPFILE"; _release_lock' EXIT
  if _s3_put "${SESSION_R2_BUCKET}" "${R2_KEY}" "$TMPFILE"; then
    echo "$CURRENT_SIZE" > "$OFFSET_FILE"
    _log "✅ Uploaded to r2://${SESSION_R2_BUCKET}/${R2_KEY} (${DELTA_SIZE} bytes)"
    echo "r2://${SESSION_R2_BUCKET}/${R2_KEY}"
  else
    _log "❌ Upload failed"
    exit 1
  fi
  # --- Retry any pending uploads from previous async failures ---
  for manifest in "$PENDING_DIR"/*.manifest.json; do
    [ -f "$manifest" ] || continue
    eval "$(python3 -c "
import sys, json
m = json.load(open('$manifest'))
print('P_R2_KEY=%s' % m.get('r2_key', ''))
print('P_DELTA=%s' % m.get('delta_path', ''))
print('P_BUCKET=%s' % m.get('bucket', ''))
" 2>/dev/null || echo "P_R2_KEY= P_DELTA= P_BUCKET=")"
    { [ -z "$P_R2_KEY" ] || [ -z "$P_DELTA" ] || [ -z "$P_BUCKET" ]; } && continue
    [ -f "$P_DELTA" ] || { rm -f "$manifest"; continue; }
    if _s3_put "${P_BUCKET}" "${P_R2_KEY}" "$P_DELTA"; then
      rm -f "$P_DELTA" "$manifest"
      _log "🔄 Retried pending: $P_R2_KEY"
    fi
  done
fi
---
name: session-to-r2
description: Store session logs (JSONL) in Cloudflare R2 as-is. Runs automatically from the PreCompact/SessionEnd hooks and tracks per-session byte offsets to avoid duplicate uploads. Trigger via automatic hook execution or manually with /session-to-r2.
allowed-tools:
  - Read
  - Bash
---

Session to R2

Stores Claude Code session JSONL transcripts in Cloudflare R2 as-is.

Environment variables

Variable                   Required                        Description
SESSION_R2_BUCKET          No (default: claude-sessions)   R2 bucket name
R2_S3_ACCESS_KEY_ID        Yes                             R2 S3 Access Key ID
R2_S3_SECRET_ACCESS_KEY    Yes                             R2 S3 Secret Access Key
R2_S3_ENDPOINT             Yes                             R2 S3 endpoint URL
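
For a quick manual test the variables can simply be exported in the shell; the values below are placeholders, not real credentials:

export SESSION_R2_BUCKET=claude-sessions
export R2_S3_ACCESS_KEY_ID=<key_id>
export R2_S3_SECRET_ACCESS_KEY=<secret>
export R2_S3_ENDPOINT=https://<account_id>.r2.cloudflarestorage.com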

Installation

bash ~/.claude/skills/session-to-r2/scripts/install-hook.sh

What gets installed:

  • ~/.claude/hooks/session-to-r2.sh, precompact-pipeline.sh, session-start-tasks.sh — upload session JSONL deltas to R2 and run background maintenance
  • ~/.agents/scripts/r2-s3v4.sh — shared curl S3v4 upload helper
  • ~/.claude/settings.json — PreCompact / SessionEnd / SessionStart hook registration (see the example below)
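
After installation, the hook entries in ~/.claude/settings.json end up roughly like this (abridged; this is what the Python block in install-hook.sh above appends):

{
  "hooks": {
    "PreCompact": [
      {"matcher": "auto",
       "hooks": [{"type": "command", "command": "bash ~/.claude/hooks/precompact-pipeline.sh"}]}
    ],
    "SessionEnd": [
      {"matcher": "*",
       "hooks": [{"type": "command", "command": "bash ~/.claude/hooks/precompact-pipeline.sh --sync"}]}
    ],
    "SessionStart": [
      {"matcher": "startup",
       "hooks": [{"type": "command", "command": "bash ~/.claude/hooks/session-start-tasks.sh"}]},
      {"matcher": "resume",
       "hooks": [{"type": "command", "command": "bash ~/.claude/hooks/session-start-tasks.sh"}]}
    ]
  }
}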

Prerequisites

  • R2_S3_ACCESS_KEY_ID, R2_S3_SECRET_ACCESS_KEY, and R2_S3_ENDPOINT are available as environment variables, or are managed encrypted with sops-secrets
  • The R2 bucket has already been created (one way to do this is shown below)
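
If you manage Cloudflare resources with Wrangler, the bucket can be created with something like the following (the bucket name is only an example; match it to SESSION_R2_BUCKET):

wrangler r2 bucket create claude-sessions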

Secrets management

Managing the R2 S3 credentials encrypted with the sops-secrets skill is recommended. Because the hook environment may not inherit your shell's environment variables, the scripts include an automatic fallback that decrypts them with sops decrypt.

# Add the credentials to sops-secrets
sops secrets/agents.enc.env
# R2_S3_ACCESS_KEY_ID=<key_id>
# R2_S3_SECRET_ACCESS_KEY=<secret>
# R2_S3_ENDPOINT=https://<account_id>.r2.cloudflarestorage.com

How it works

  1. When the hook fires, locate the session JSONL from transcript_path
  2. Read the byte position already uploaded from the offset file (~/.claude/session-r2-offsets/<session-id>.offset)
  3. Extract only the delta bytes after that position
  4. Upload to R2 using curl with an S3v4 signature
  5. Update the offset file (a minimal sketch of steps 2-5 follows below)
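
A minimal sketch of the offset/delta mechanism, stripped of the locking, async mode, and sops fallback that the real session-to-r2.sh adds; the "example/repo" prefix stands in for the derived <owner>/<repo>:

#!/usr/bin/env bash
# Illustration only -- not the installed hook.
set -euo pipefail
TRANSCRIPT="$1"                                       # path to the session .jsonl
SESSION_FILE=$(basename "$TRANSCRIPT" .jsonl)
OFFSET_FILE="$HOME/.claude/session-r2-offsets/${SESSION_FILE}.offset"
PREV=$([ -f "$OFFSET_FILE" ] && cat "$OFFSET_FILE" || echo 0)
SIZE=$(wc -c < "$TRANSCRIPT" | tr -d ' ')
[ "$SIZE" -gt "$PREV" ] || exit 0                     # nothing new since the last upload
DELTA=$(mktemp)
tail -c +"$((PREV + 1))" "$TRANSCRIPT" > "$DELTA"     # only the bytes after PREV
KEY="example/repo/${SESSION_FILE:0:12}/$(date +%Y%m%d_%H-%M-%S)_${PREV}.jsonl"
if "$HOME/.agents/scripts/r2-s3v4.sh" put "${SESSION_R2_BUCKET:-claude-sessions}" "$KEY" "$DELTA"; then
  echo "$SIZE" > "$OFFSET_FILE"                       # advance the offset only after a successful upload
fi
rm -f "$DELTA"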

R2 object key

<owner>/<repo>/<session-id>/<YYYYMMDD>_<HH-MM-SS>_<byte-offset>.jsonl

Outside a git repository:

_noproject/<session-id>/<YYYYMMDD>_<HH-MM-SS>_<byte-offset>.jsonl
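
For example, a session started in a checkout of a hypothetical github.com/acme/widgets repository would produce keys such as (session ID, timestamps, and offsets are made up):

acme/widgets/0199c2f3ab12/20260319_03-45-10_0.jsonl
acme/widgets/0199c2f3ab12/20260319_04-12-33_45217.jsonl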

Manual run

echo '{"transcript_path": "<path>", "cwd": "<cwd>"}' | bash ~/.claude/hooks/session-to-r2.sh

Rules

  • Send the JSONL as-is, without filtering (raw log preservation)
  • Pass secrets via environment variables; never hard-code them in the scripts