rr Debugger — The Complete, In-Depth Guide for C++ Engineers (Record, Replay, and Reverse Debugging Mastery)
Disclaimer: ChatGPT-generated document.
This is a practical, in-depth guide to rr for C++ engineers.
rr is a Linux record-and-replay debugger. You run your program once under rr record, rr records the execution, and then you debug that exact execution later with rr replay. During replay, execution is deterministic: instruction flow, register values, memory contents, syscall results, object addresses, and process IDs are reproduced so you can restart the same failing run as many times as you need. rr integrates primarily with gdb and is especially known for making reverse execution practical. (rr-project.org)
For a C++ developer, the core value is simple: rr turns “I saw it once and now it’s gone” into “I can replay the exact failure forever until I understand it.” That is particularly powerful for crashes, memory corruption, intermittent test failures, shutdown hangs, ordering bugs, and multi-process issues. rr’s own homepage explicitly frames the tool around intermittent failures and replaying the failing execution repeatedly under a debugger until it is understood. (rr-project.org)
rr fits C++ extremely well because it preserves the things C++ debugging often depends on:
- exact object addresses
- exact thread/process history
- exact memory/register state
- exact syscall behavior
- replayable optimized/native code paths
That means you can do things that are normally painful in C++ debugging, such as:
- stop near the symptom,
- identify a corrupted field or pointer,
- put a hardware watchpoint on the exact address,
- then run backward to the write that caused the corruption. (GitHub)
This is rr’s killer workflow. It is not just “time travel debugging” as a slogan; it is practical reverse debugging on real Linux user-space programs with low enough overhead to use on real workloads. (GitHub)
At a high level, rr records a group of Linux user-space processes, captures all inputs they receive from the kernel, and records the small set of nondeterministic CPU effects it needs. Replay then reconstructs the same execution so control flow and machine state stay identical. rr does this in user space, on stock Linux systems, using hardware/OS features that make deterministic replay practical. (rr-project.org)
A few consequences matter a lot in practice:
- rr records process trees, not just one process. Forked children are recorded automatically. (GitHub)
- rr assigns each recorded action an event number. You can jump replay to a chosen event and use those event numbers to orient yourself in long traces. (GitHub)
- rr is deterministic enough that restarting replay with gdb's run keeps your breakpoints/debugging state and replays the same execution again. (GitHub)
- rr relies on hardware performance counters and specific kernel support, which is why CPU/kernel compatibility matters. (GitHub)
The best technical overview remains the rr paper, Engineering Record And Replay For Deployability. (GitHub)
rr is strongest when the bug is:
- hard to reproduce, but not impossible to catch while recording
- stateful, where understanding earlier writes/events matters
- native/Linux-side, where gdb-level inspection is useful
- multi-process
- timing/order related, but still reproducible under rr or rr chaos mode (rr-project.org)
Classic rr-friendly C++ bug classes include:
- use-after-free
- wild writes / heap corruption
- vtable corruption
- “value is wrong here; where did it change?”
- shutdown deadlocks/hangs
- fork/exec child misbehavior
- flaky integration tests
- sanitizer-triggered failures you want to reverse-debug (GitHub)
rr is not magic. Its biggest practical limits are:
- It is Linux-only. The project’s README lists Linux kernel requirements, and the FAQ says there are no plans for BSD/macOS support. (GitHub)
- rr forces threads onto a single core during recording, so highly parallel code can slow down a lot and some concurrency behavior may change simply because true multicore execution is removed during recording. The official usage docs explicitly say rr forces execution on a single core and that this can add slowdown when the application benefits from multicore parallelism. (GitHub)
- It is not designed to safely handle malicious code. The usage docs warn that rr intentionally opens holes in seccomp/namespace sandboxes to enable recording. (GitHub)
- Replaying requires the recorded executable image not to have changed unless you packed the trace. (GitHub)
- LLDB support exists now, but reverse execution is not available through LLDB. (GitHub)
As of the current upstream README, rr requires:
- Linux kernel >= 4.7
- either:
  - an Intel CPU with Nehalem-or-later microarchitecture,
  - certain AMD Zen or later CPUs,
  - or certain AArch64 microarchitectures
- VM guests are supported only if the VM virtualizes the needed hardware performance counters; VMware and KVM are known to work, while Xen is called out as not working. (GitHub)
Recent release notes and wiki pages add important nuance:
- rr 5.9.0 says that on kernels >= 6.10, rr now works with perf_event_paranoid=2, which is the default on most distros. Before that, older guidance often required changing it to 1. (GitHub)
- rr 5.8.0 added some LLDB support via rr replay -d lldb, but LLDB can only replay forwards for now. (GitHub)
- rr 5.6.0 says AArch64 support is production quality, with caveats; LDREX/STREX are not supported and userspace must use LSE atomics. (GitHub)
- AMD Zen support exists with caveats, and the Zen wiki still documents a hardware SpecLockMap workaround on some systems. (GitHub)
If you are on a VM, the “will rr work on my system?” check is to verify the relevant hardware performance counter produces nonzero output:
- Intel: perf stat -e br_inst_retired.conditional true
- AMD Ryzen: perf stat -e ex_ret_cond true (GitHub)
There are three realistic paths:
- Use distro packages if they’re recent enough.
- Use the upstream .deb/.rpm packages linked from rr-project.
- Build from source if your kernel, distro, or CPU is newer than the packaged rr understands. The official build docs explicitly say latest hardware and kernel features may require building from GitHub master. (rr-project.org)
The build/install wiki lists packages for Fedora and Debian-family systems and notes supported build environments such as RHEL 9, Debian 11, Ubuntu 20.04 LTS, and Arch. (GitHub)
The minimal workflow is:
rr record ./your_program arg1 arg2
rr replay
That is the official tl;dr; the record subcommand token is even optional. rr stores the trace in a trace directory under _RR_TRACE_DIR, which defaults to the rr data directory under your home directory. (GitHub)
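If the default location is inconvenient (home directory on NFS, a small disk), that variable can redirect it. A minimal sketch, with a hypothetical scratch path:
# hypothetical scratch location; record and replay must agree on it
_RR_TRACE_DIR=/mnt/scratch/rr-traces rr record ./your_program arg1 arg2
_RR_TRACE_DIR=/mnt/scratch/rr-traces rr replay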
During rr replay, rr launches gdb and connects it to the replay server automatically. Then you debug normally:
- set breakpoints,
- step,
- continue,
- inspect memory/registers,
- and, crucially, use reverse execution. (GitHub)
In an rr-backed gdb session, run does not mean “rerun the program from scratch in a fresh nondeterministic process.” It means “restart replay of the exact same recording.” Breakpoints and debugger state survive, and the same addresses and same execution happen again. (GitHub)
That changes how you debug.
With a normal debugger, once you restart, many facts you learned become stale. With rr, your knowledge accumulates (see the sketch after this list):
- the corrupted object is still at the same address,
- the same child process reaches the same point,
- the same event numbers occur again,
- the same suspicious write happens again. (rr-project.org)
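A minimal sketch of what that looks like in one rr-backed gdb session (the function name is hypothetical):
(gdb) break Widget::update          # hypothetical function in your program
(gdb) continue                      # replay forward to the breakpoint
(gdb) run                           # restart replay of the same recording
(gdb) continue                      # the same breakpoint hits again, same address, same state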
For C++, this is enormously helpful.
rr supports gdb reverse commands:
- reverse-continue
- reverse-step
- reverse-next
- reverse-finish (GitHub)
The classic rr pattern is:
(gdb) p obj->field
(gdb) watch -l obj->field
(gdb) reverse-continue
The rr docs explicitly recommend watch -l because without -l, gdb may reevaluate the expression through changing scopes and reverse execution can become slow or appear buggy. (GitHub)
That gives you the instruction that last changed the value. For memory corruption, this is often the shortest path from symptom to cause.
rr gives every recorded event an event number. This matters because you can:
- mark stdio with event numbers using -M
- jump replay to a given event using rr replay -g EVENT
- ask gdb where you are using when
- restart replay at a new event via run EVENT (GitHub)
This is extremely useful when a trace is huge.
A good workflow, sketched below, is:
- replay once with -M so stdout/stderr lines show [rr PID EVENT]
- find the log line near the failure
- restart replay with -g EVENT
- narrow from there with breakpoints or reverse commands (GitHub)
rr automatically records forked processes. Use:
- rr ps to list recorded pids
- rr replay -p <pid> to attach after that process's first exec
- rr replay -f <pid> to attach right after the fork (GitHub)
You can even give -p <filename> to attach to the first exec of a particular executable name. rr also preserves recorded PIDs during replay, which makes correlating logs much easier than in ordinary debugging sessions. (GitHub)
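In practice this looks like the following (the pid is hypothetical, taken from the rr ps output; the executable name is a placeholder):
rr ps                               # list recorded pids in the latest trace
rr replay -f 4242                   # attach to pid 4242 right after its fork
rr replay -p helper_binary          # or: attach at the first exec of that executable name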
If you work on launchers, helper processes, browser-style architectures, test harnesses, services with subprocesses, or compiler driver chains, this is a major advantage.
Useful replay entry modes include:
- rr replay — latest trace
- rr replay path/to/trace — replay a specific trace
- rr replay -g EVENT — jump to event before debugger interaction
- rr replay -p PID — wait for process exec
- rr replay -f PID — wait for process fork
- rr replay -e — start at the end of the recording or process exit, then debug backward from the crash/symptom (GitHub)
-e is especially good for postmortem-style debugging when the bug manifests only at the very end.
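A minimal postmortem sketch:
rr replay -e                        # land at the end of the recording
(gdb) bt                            # see where the process ended up
(gdb) reverse-finish                # step backward out of the current frame
(gdb) reverse-continue              # or run backward to breakpoints/watchpoints you set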
rr supports gdb’s:
- checkpoint
- restart
- delete checkpoint (GitHub)
That lets you set local waypoints inside replay.
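A sketch of waypoint use inside replay:
(gdb) checkpoint                    # snapshot the current replay position (checkpoint 1)
(gdb) continue                      # explore forward past the interesting region
(gdb) restart 1                     # jump back to the saved waypoint
(gdb) delete checkpoint 1           # drop it when done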
Also, if you call a program function from gdb during rr replay, rr runs it in a temporary clone of program state and discards that clone afterward, so persistent mutation through debugger-called functions does not stick. That’s a subtle but important rule. (GitHub)
The official usage docs give realistic numbers:
- around 1.2x–1.4x slowdown in general
- around 1.1x–1.2x for purely CPU-bound workloads
- 4x or more for syscall-heavy loops
- plus extra slowdown if your workload normally benefits from real multicore execution, because rr records on one core (GitHub)
rr also notes that laptop CPU governors can matter a lot, sometimes up to 2x, and recommends the performance governor when you care about recording speed. (GitHub)
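One common way to pin the governor while recording (assumes the cpupower utility is installed; the specific tool is not rr's recommendation, only the governor choice is):
sudo cpupower frequency-set -g performance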
Newer releases have kept improving performance:
- 5.6.0 improved heavy-RDTSC recording
- 5.7.0 improved applications with thousands of threads
- 5.9.0 relaxed the perf_event_paranoid setup burden on modern kernels (GitHub)
rr’s homepage highlights Chaos mode as a way to make intermittent bugs more reproducible. Robert O’Callahan’s writeup says the normal scheduler does not do this randomization, and that chaos mode is specifically for hard-to-reproduce bugs. He also says rr itself does not know whether your app “failed”; you typically run it in a shell loop until a failing trace appears, then debug that trace. (rr-project.org)
So the practical pattern is:
while true; do
  rr record -h ./flaky_test && continue
  break
done
Here -h enables chaos mode, and the loop re-records until the test fails, leaving the failing trace as the most recent one; check rr record's help output for the exact flag semantics. The workflow idea is official: use chaos mode when ordinary rr recording does not reproduce the flaky behavior often enough. (robert.ocallahan.org)
By default, traces can depend on files from the original system. rr’s docs warn that if the recorded executable image changes before replay, bad things can happen. (GitHub)
To make traces self-contained, use:
rr pack
The trace portability wiki says rr pack:
- eliminates duplicate files,
- includes files needed for transport,
- makes it easier to move traces to another machine,
- and makes it easier to keep recordings for different software versions because the trace stops relying on hard links to your live executable files. (GitHub)
Important caveat: the destination machine still needs to support the CPU instructions/features used by the recorded program. The portability docs specifically call out CPUID and CPU feature constraints. (GitHub)
From release notes and docs, rr also has useful trace-management/tooling commands:
- rr ps — list processes in a trace (GitHub)
- rr pack — make a trace portable/self-contained (GitHub)
- rr ls — improved trace management, added in 5.3.0 (GitHub)
- rr sources, rr buildid, rr traceinfo — added in 5.3.0 to make it easier for external tools to work with traces (GitHub)
- rr dump — raw trace event inspection; the debugging protips page recommends it for examining specific failing events/ranges (GitHub)
- rr rerun — used in rr developer/debugging workflows for single-stepping a trace and dumping state, though this is more niche and less part of the basic user-facing workflow (GitHub)
For “what does this trace actually contain?” workflows, those 5.3.0 additions are worth knowing.
rr can run inside Docker, but only if rr itself already works on the host Linux system. The official Docker wiki says to launch the container with:
- --cap-add=SYS_PTRACE
- --security-opt seccomp=unconfined
because Docker normally drops SYS_PTRACE and blocks syscalls rr needs, including ptrace, perf_event_open, and process_vm_writev. It also warns that if you mount /tmp as tmpfs, it must be executable, e.g. --tmpfs /tmp:exec. (GitHub)
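Putting those flags together, a container launch might look like this (the image name and command are placeholders):
docker run -it \
  --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --tmpfs /tmp:exec \
  my-build-image rr record ./my_test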
This is one of the most common practical setup gotchas.
Historically rr is a gdb-first tool. The usage docs are written around gdb, and the reverse-execution commands are gdb’s commands. (GitHub)
As of 5.8.0, rr can launch LLDB with rr replay -d lldb, but the release notes are explicit: LLDB does not expose reverse-execution commands, so replay is forward-only there for now. (GitHub)
For C++, unless you have a strong LLDB-specific reason, rr + gdb is still the mainline experience.
A useful advanced point: Robert O’Callahan documented that ASAN worked in rr and that LSAN was fixed to work too. He also notes a subtle advantage: LSAN does not work under ordinary gdb because of ptrace conflicts, but rr can emulate ptrace interactions between rr-managed threads so the leak inspection path works. (robert.ocallahan.org)
For C++ teams, that means:
- record a sanitizer build,
- trigger the ASAN/LSAN failure once,
- replay deterministically,
- then reverse-debug around the exact corrupted address or leak-related state (see the sketch after this list). (robert.ocallahan.org)
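A minimal sketch of that loop, with hypothetical file and binary names:
g++ -g -O1 -fsanitize=address -fno-omit-frame-pointer -o my_test my_test.cpp
rr record ./my_test                 # capture the run where ASAN reports the bug
rr replay                           # replay it deterministically under gdb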
This matters more than many people realize.
The official docs warn that rr is not a secure sandbox boundary. If you record untrusted code that relies on seccomp or namespaces for confinement, rr intentionally weakens those barriers to make recording possible. An attacker could, in theory, detect rr and exploit those holes. (GitHub)
So:
- use rr for trusted code,
- not as a safe harness for hostile binaries.
The usage docs call out these environment variables:
- TMPDIR — rr needs temp space on a filesystem that is not mounted noexec
- RR_TMPDIR — rr-specific temp dir
- _RR_TRACE_DIR — where traces are stored
- RR_LOG — enable rr internal module logging
- RUNNING_UNDER_RR=1 — rr sets this in recorded processes (GitHub)
That last one can be useful if your app/test harness wants to slightly alter behavior under rr, though you should use that sparingly.
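For example, a shell-based test harness might consult it like this (the knob name is hypothetical):
if [ "$RUNNING_UNDER_RR" = "1" ]; then
  export MYAPP_DISABLE_BUSY_WAIT=1  # hypothetical app-specific setting
fi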
Some canonical recipes:
- Memory corruption: run to the symptom, inspect the bad field, set watch -l on the exact address or field, then reverse-continue to the write. This is the most rr-native workflow and often the fastest path to root cause. (GitHub)
- Failure at exit: use rr replay -e, land at the end, then run backward. (GitHub)
- Misbehaving child process: use rr ps, then rr replay -p PID or -f PID. (GitHub)
- Flaky test: loop rr record or use chaos mode until the failure happens; then debug the captured failing trace forever. (rr-project.org)
- Bug captured on another machine: ask for rr pack on the trace and replay it locally, assuming CPU feature compatibility. (GitHub)
The biggest real-world gotchas are:
- rr works only on supported Linux/hardware combinations. (GitHub)
- VM support depends on virtualized performance counters. (GitHub)
- older setups may need perf_event_paranoid=1; newer kernels with rr 5.9.0 can often stay at 2. (GitHub)
- /tmp or temp dirs mounted noexec can break things. (GitHub)
- replacing the executable or libraries after recording can invalidate replay unless the trace is packed. (GitHub)
- watch -l is the right watchpoint form for reverse debugging. (GitHub)
- LLDB support is forward-only. (GitHub)
- rr is not for hostile code. (GitHub)
If you are a C++ engineer on Linux, rr is one of the highest-leverage debugging tools you can learn.
The practical starter kit is:
rr record ./your_test_or_binary
rr replay
Then inside gdb, learn these first:
- run
- when
- watch -l <expr>
- reverse-continue
- reverse-step
- checkpoint
- restart (GitHub)
After that, learn:
- rr replay -e
- rr replay -g EVENT
- rr replay -p PID / -f PID
- rr pack
- rr ps
- rr ls, rr traceinfo, rr sources, rr buildid (GitHub)
That set gets you from “basic rr user” to “I can actually solve nasty C++ bugs with this.”
GDB vs rr — A Deep Dive into Recording, Replay, and Reverse Debugging: What They Share and What Sets Them Apart
Does GDB have reverse debugging on its own? Yes: GDB itself has reverse-execution commands, and it also has its own recording facilities. rr does not invent the reverse-* command family; those are GDB commands. GDB's manual lists reverse-continue, reverse-step, and related reverse-execution commands, and says reverse execution can be provided either by GDB's own recording modes or by a target that supports reverse execution directly. (Sourceware)
The short conceptual answer is:
- GDB is the debugger front end and command environment.
- rr is a separate record/replay engine that uses GDB as its main debugging UI during replay.
- So rr is not merely “just GDB”, but it also is not a totally separate debugger UI. It supplies the deterministic replay backend; GDB supplies the interactive debugger interface and standard debugger commands. rr's own README says debugging with rr “extends gdb with very efficient reverse-execution,” and the rr site shows rr replay launching GDB to debug the saved trace. (GitHub)
Think of GDB as answering: “How do I inspect and control execution?”
Think of rr as answering: “What execution am I controlling?”
With plain GDB, you are usually controlling a live process. With GDB recording enabled, you are controlling a live process plus some amount of execution history that GDB itself recorded. With rr replay, you are controlling a saved deterministic execution trace that rr recorded earlier and is now replaying for GDB. (Sourceware)
GDB has a built-in feature called Process Record and Replay. The manual says GDB can record a log of process execution and replay it later with both forward and reverse execution commands. It supports two main recording methods:
- record full
- record btrace (Sourceware)
GDB’s record full is its software record/replay mode. The manual says it allows replaying and reverse execution, and it stores an execution log that GDB can move around in. GDB also provides commands like record goto, record save, and record restore, plus settings to cap or uncap the number of recorded instructions. (Sourceware)
GDB’s record btrace is a different beast. It uses hardware-supported instruction tracing on supported Intel processors, but the manual is explicit that it does not record data. Because of that, reverse execution there is more limited: variables and registers are not generally available during reverse execution in the same way as with full recording, and the trace is stored in a ring buffer, so old history gets overwritten when the buffer fills. (Sourceware)
So if your question is simply, “Can GDB reverse-step and reverse-continue without rr?” the answer is yes. GDB can do that with its own recording support, subject to platform/mode limitations and subject to how much history has actually been recorded. (Sourceware)
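A minimal plain-GDB sketch, no rr involved (set record full insn-number-max is the manual's setting for the instruction cap discussed below):
(gdb) start                                      # run to main; the process is live
(gdb) set record full insn-number-max unlimited  # lift the default instruction cap
(gdb) record full                                # begin software recording
(gdb) continue                                   # ... run until a stop ...
(gdb) reverse-continue                           # move backward within the recorded log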
rr records a Linux user-space execution to disk, then later replays that exact execution. The rr site says the recording is deterministic and that replay preserves address spaces, register contents, syscall data, object addresses, and so on. It also says you can restart replay with GDB’s run command and get the same execution again, with debugger state preserved across the restart. (RR Project)
rr also records and replays trees of processes and threads, not just a single thread in isolation. Its project page describes it as recording, replaying, and debugging “applications (trees of processes and threads),” and the rr site highlights support for multiple-process workloads, including entire containers. (GitHub)
The practical effect is that rr gives GDB a replay target that is usually much more like “a frozen, re-runnable past execution” than GDB’s own built-in recording modes. rr’s own docs emphasize that the same execution is replayed every time, including the same memory layout and same addresses, which is why workflows like “find bad value, set watchpoint, reverse-continue to the write” work so well. (RR Project)
Does rr add anything beyond what GDB already provides? Yes, a lot.
The most important distinction is this:
GDB gives you the reverse commands. rr gives those commands a much stronger, more durable execution history to operate on. (Sourceware)
If you only remember one sentence, remember that one.
More concretely, rr adds these major things beyond plain GDB:
rr records the execution once and saves it to disk. Later, rr replay lets GDB debug that same run over and over. The rr site explicitly says “the same execution is replayed every time,” and that the trace can be debugged after the failure has already happened. GDB’s built-in record full is centered around an execution log in the debugger session, with configurable instruction limits and optional log save/restore, but it is not presented as rr-style durable deterministic replay of a whole saved run as the primary workflow. (RR Project)
rr explicitly guarantees that memory layout and object addresses stay the same across replay runs. That is a huge deal for C++ because it makes address-based watchpoint workflows reliable across restart. GDB’s manual for built-in recording does not make rr’s kind of “same addresses every replay run” promise as a core feature. (RR Project)
rr is designed around recording applications consisting of process trees. That is a major strength for real Linux software. GDB's process record/replay documentation stays at the level of a single debugged process, while rr's docs are much more explicitly focused on whole application executions and replaying them later as traces. (GitHub)
rr’s site advertises “durable, compact traces that can be ported between machines.” That is a different workflow category from GDB’s in-session recording buffer. GDB does have record save/record restore, but rr is much more explicitly a trace-capture-and-replay system. (RR Project)
rr’s homepage repeatedly frames its use around capturing a hard failure once, then replaying it until it is understood. That is rr’s core product idea. GDB’s built-in recording exists, but rr is purpose-built for this exact workflow. (RR Project)
rr’s own project materials repeatedly emphasize that it is meant to provide efficient reverse execution under GDB on real applications, not just toy examples or narrow target setups. (RR Project)
There is real overlap.
From your day-to-day perspective, both can give you:
- reverse-continue
- reverse-step
- breakpoint and watchpoint driven backward debugging
- replaying earlier execution rather than guessing what happened (Sourceware)
And in both cases, GDB is still the command language you interact with. That means many debugger skills transfer directly:
- breakpoints
- watchpoints
- stepping
- inspecting variables
- stack walking
- disassembly
- scripting / IDE integration with GDB workflows (RR Project)
So at the user-interface level, rr often feels like “GDB with superpowers,” which is exactly how many people mentally model it.
Here is the precise comparison.
record full is GDB's own software recording mode. It records enough execution history for replay and reverse execution. But the manual makes clear that the history is fundamentally an execution log managed by GDB, with an instruction-count limit that defaults to 200000 unless you raise or uncap it. When the limit is reached, GDB can stop or discard the oldest instructions to make room for newer ones. (Sourceware)
That means record full is often best thought of as a debugger-managed rolling time window, unless you explicitly configure it to be unlimited and can tolerate the memory cost. It is powerful, but it is not the same operating model as rr’s “save the whole failing run to disk and replay it tomorrow.” (Sourceware)
record btrace records control flow, not full data state. The manual says it does not trace data, stores data in a ring buffer, and offers more limited reverse execution. It may let you recover where execution went, but it does not provide full variable/memory reconstruction across replay positions. That makes it quite different from rr. (Sourceware)
rr is a system-level user-space record/replay framework for Linux processes. It records kernel inputs and the nondeterministic CPU effects rr cares about, writes a durable trace to disk, then later replays that exact execution for GDB. rr’s site is explicit that replay preserves instruction-level control flow plus memory/register contents, object addresses, and syscall results. (RR Project)
That is why rr is usually much better for “I need to debug a nasty C++ bug from a real run that already happened.”
Is rr, then, just a frontend for GDB's reverse commands? Partly yes, but that wording undersells rr.
It is true that rr uses GDB as the debugger interface during replay. You do your interactive debugging in GDB, and rr’s docs explicitly present the experience that way. (RR Project)
But rr is not “just a thin wrapper that calls GDB reverse-step.” Its core value is that it provides the recording engine, trace format, deterministic replay machinery, and replay target that make those GDB commands much more powerful in practice. Without rr, GDB’s reverse commands are limited to whatever execution history GDB itself or the target has available. With rr, GDB is driving a deterministic replay of a previously captured execution. (Sourceware)
So the right phrasing is:
rr uses GDB for the UI, but rr supplies the heavy machinery that makes replayed execution durable, deterministic, and practical for real Linux applications. (GitHub)
These are genuinely shared capabilities:
Both let you move backward through execution history instead of only forward. (Sourceware)
The command names are fundamentally GDB’s commands, not rr-specific syntax: reverse-continue, reverse-step, and so on. rr is valuable partly because those same commands become much more useful on top of rr replay. (Sourceware)
In both, you still use familiar debugger actions: breakpoints, watchpoints, stepping, stack inspection, memory inspection, disassembly. (Sourceware)
Neither tool can reverse-execute outside the history that exists. GDB says reverse execution is limited by the recorded log range; rr likewise can replay only what was recorded into the trace. (Sourceware)
These are the key differences.
Recording model:
- GDB record full: software execution log inside GDB's process-record machinery, typically bounded unless configured otherwise. (Sourceware)
- GDB record btrace: branch/instruction history, not full data history. (Sourceware)
- rr: deterministic saved trace of a Linux user-space execution, with preserved memory layout, registers, syscall results, and process-tree behavior on replay. (RR Project)
Persistence:
- GDB: recording is fundamentally tied to the current debugging session's log, though record save / record restore exist for supported methods. (Sourceware)
- rr: the whole product model is trace capture to disk and later replay. (RR Project)
Data fidelity in reverse:
- GDB record btrace is much weaker here because it does not record data and may not let you inspect variables/registers in reverse the way you want. (Sourceware)
- rr is especially strong because addresses and memory layout are reproduced exactly across replay runs, making “watch this object at this address, then reverse-continue” a first-class workflow. (RR Project)
Process trees:
- rr explicitly targets whole applications with trees of processes and threads. (GitHub)
- GDB process record/replay is not presented in its manual as a “capture an application tree and replay it later” system. (Sourceware)
Workflow feel:
- GDB recording often feels like “I'm already in a live debug session; let me turn on recording.” The manual even says you first start the process with run or start, then use record method. (Sourceware)
- rr feels like “I captured the bad run; now I can debug that frozen past whenever I want.” (RR Project)
Architecture breadth:
- GDB process record/replay supports a broader set of GNU/Linux architectures in its manual: ARM, AArch64, LoongArch, Moxie, PowerPC, PowerPC64, S/390, RISC-V, and x86. (Sourceware)
- rr is more selective because it depends on specific Linux/kernel/CPU support and is designed around the hardware features rr needs. Its docs emphasize stock Linux but also call out CPU/kernel constraints and a single-core execution model during recording. (RR Project)
For C++, the decisive question is usually not “can the debugger spell reverse-step?” The decisive question is:
Can I reliably get back to the write that corrupted this object, in the exact failing run, with the same addresses and state? (RR Project)
That is where rr tends to pull away.
GDB’s built-in facilities are real and useful. But rr’s deterministic replay model is much closer to the workflow C++ engineers want for serious native bugs: capture once, replay forever, keep the same addresses, and use ordinary GDB breakpoints/watchpoints plus reverse execution until the cause is found. (RR Project)
Use plain GDB recording when:
- you are already in GDB,
- you want a local reverse-debugging window,
- the bug is small enough that GDB’s own record history is sufficient,
- or you specifically want record full / record btrace behavior. (Sourceware)
Use rr when:
- the failure is intermittent or expensive to reproduce,
- you want to debug a run after it already happened,
- you need deterministic replay across restarts,
- you care about stable addresses/object layout,
- or the bug spans threads/processes and you want a saved trace of the whole execution. (RR Project)
GDB and rr are not competing copies of the same thing. They overlap, but they sit at different layers.
- GDB provides the debugger interface and already has its own reverse-debugging and recording features. (Sourceware)
- rr provides a much stronger record/replay backend for Linux user-space programs and then lets you drive that replay through GDB. (GitHub)
So the best one-sentence comparison is:
GDB can do reverse debugging; rr makes reverse debugging much more practical, durable, and deterministic for real C/C++ Linux programs. (Sourceware)
