Software As Plans
I began a refactoring project yesterday morning to address an architectural issue in Tug, specifically in how the project's skills and agents interact with each other to implement the plans I give it.1 This idea had kept me up during the night, which means that by the time I had finished breakfast and started writing it out, I had been thinking about what I wanted to say for several hours.
About ninety minutes after I began, I had the 1700-word The Tug Multi-Agent Coordination Problem document you can see below.2 I pasted this into Claude Code and had a few rounds with the AI, producing the Conversation also included below. From these, I produced a design document. I continued with several more rounds with Claude Code and GPT-5.3 Codex in Cursor, using this design document to produce two plans: the first to implement Tugstate, a major new feature delivering embedded SQLite state management to plan execution, and a second to cut over to using it.
This process felt mostly like writing software has always felt. I spent about four hours puzzling over design choices, making tradeoffs, and deciding on what features are and are not in scope. The difference, of course, is that when the planning was done, I turned the coding over to Claude Code. It churned on the first plan for 2h 16m, and the second for 1h 45m. During the first AI coding session, I went for a long walk, and during the second, I went food shopping, wrote this post, and started to cook dinner.
The net result: two PRs merged, +8168 −16100. The software worked for a simple case on the second try; the first attempt failed in a trivial way that took two minutes to fix. I might have some more cleanup work to do today—actually, I imagine this is likely—but that would also have been true if I had taken several more days to do this refactor without the help of AI. This is a pace of code changes I simply could not have managed myself with one day of effort.
My point here is that, in the future, when people say they're "writing software", they aren't going to mean cranking out code line by line—they're going to mean creating and reviewing complex plans, which they then hand off to an AI to implement. For me, that future is already here.
The Tug Multi-Agent Coordination Problem
Tug has a multi-agent coordination problem.
At a high level, Tug should support taking an idea for a software change, handing it to agents, delivering the software, and repeating that loop with human oversight.
That workflow compresses into three phases:
- PLAN: make a plan for a code change
- IMPLEMENT: execute the code changes described by a plan
- MERGE: merge completed implementation work back into `main`
Phases and Implications
Because MERGE is part of the workflow, git implications follow.
- [I-01] Each plan refers to one and only one git repo.
- [I-02] Each PLAN phase is committed on `main` when it completes, and must have a PLAN-REVISION that is a child of its base revision.
- [I-03] Each PLAN refers to a base revision in git.
- [I-04] Each PLAN has one or more STEPS describing required work.
- [I-05] Each PLAN STEP has subsections including Depends on, Commit, References, Artifacts, Tasks, Tests, Checkpoint, and Rollback.
- [I-06] Tasks, Tests, and Checkpoint sections include CHECKLISTS to enumerate required work in detail.
- [I-07] Each IMPLEMENT phase must have an associated PLAN.
- [I-08] Each PLAN must be VALIDATED before IMPLEMENT begins, including plan/step dependency-chain resolution.
- [I-09] Each IMPLEMENT phase runs in a `git worktree` based on the same revision as its PLAN.
- [I-10] Each IMPLEMENT phase must atomically fetch the next ready PLAN STEP, honoring dependencies.
- [I-11] Each IMPLEMENT phase must atomically mark PLAN STEPS complete.
- [I-12] MERGE must run only against a completed PLAN (all steps complete in IMPLEMENT).
- [I-13] Each IMPLEMENT worktree can be deleted once merged.
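The dependency-chain resolution demanded by [I-08] reduces to two graph checks: every `Depends on` reference names a real step, and the references form no cycle. A minimal sketch of the idea in Python (illustrative only; Tug's actual validator lives in `tugcode`, and the function name here is hypothetical):

```python
def validate_dependencies(steps: dict[str, list[str]]) -> list[str]:
    """Validate a plan's step dependency graph.

    `steps` maps each step anchor to the anchors it depends on.
    Returns human-readable problems; an empty list means the
    dependency chain resolves.
    """
    problems = []
    # Every dependency must name a real step.
    for anchor, deps in steps.items():
        for dep in deps:
            if dep not in steps:
                problems.append(f"{anchor} depends on unknown step {dep}")

    # The known-step subgraph must be acyclic (DFS with coloring).
    state = {}  # anchor -> "visiting" | "done"

    def visit(anchor: str) -> None:
        state[anchor] = "visiting"
        for dep in steps[anchor]:
            if dep not in steps:
                continue  # already reported above
            if state.get(dep) == "visiting":
                problems.append(f"dependency cycle through {dep}")
            elif dep not in state:
                visit(dep)
        state[anchor] = "done"

    for anchor in steps:
        if anchor not in state:
            visit(anchor)
    return problems
```

A plan that passes this check has a well-defined execution order, which is what lets IMPLEMENT compute "next ready step" later.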
Phases and Skills
Each phase is orchestrated by a skill that can call subagents.
- PLAN
- Skill role: turn an idea into a validated, implementation-ready tugplan with explicit steps
- Subagents:
  - `clarifier-agent`: asks targeted questions to resolve ambiguity before drafting
  - `author-agent`: drafts and revises the tugplan from clarified requirements
  - `critic-agent`: reviews plan quality/completeness/implementability before approval
- IMPLEMENT
- Skill role: execute the validated tugplan in a worktree and track step-by-step completion
- Subagents:
  - `architect-agent`: defines per-step strategy and expected touch scope
  - `coder-agent`: executes plan steps and writes code while monitoring drift
  - `reviewer-agent`: verifies quality and conformance to the approved plan
  - `committer-agent`: performs required git commits for completed work
  - `auditor-agent`: runs end-to-end quality gates (fresh build/tests)
  - `integrator-agent`: pushes branch, opens PR, confirms CI/check status
- MERGE
- Skill role: mechanically integrate completed IMPLEMENT changes into `main` and clean up worktree state
Phases and Requirements
It must be easy for a developer to:
- [R-01] Start a new PLAN phase at will.
- [R-02] Stop an in-progress PLAN phase at will.
- [R-03] Resume a previously started PLAN phase at will.
- [R-04] Start IMPLEMENT for a VALIDATED PLAN at will.
- [R-05] Stop an in-progress IMPLEMENT phase at will.
- [R-06] Start MERGE for a completed PLAN at will.
- [R-07] Stop an in-progress MERGE, if it has not yet completed.
- [R-08] Track PLAN/IMPLEMENT/MERGE progress in detail, including:
- which phase instances are running
- what any running subagent is doing now
- which PLAN STEP IMPLEMENT is currently executing
- checklist state per step item (`open`/`in-progress`/`completed`)
- [R-09] Receive clear feedback for logically inconsistent requests, such as:
- IMPLEMENT with no VALIDATED PLAN
- MERGE for a non-completed PLAN
- re-MERGE of an already completed PLAN
Skill <=> Subagent Communication
Subagents communicate through skills (orchestrator-mediated), not direct messaging:
- [C-01] `clarifier-agent -> author-agent` (via PLAN): emits structured clarification (`questions`, `assumptions`, ambiguity analysis); PLAN combines with user answers and passes forward.
- [C-02] `author-agent -> critic-agent` (via PLAN): returns plan metadata (`plan_path`, section/skeleton status); PLAN passes plan artifact for quality review.
- [C-03] `critic-agent -> author-agent` (revision loop via PLAN): emits `issues` + `recommendation` (approve/revise/reject); PLAN either finalizes or loops feedback for revision.
- [C-04] `architect-agent -> coder-agent` (via IMPLEMENT + beads design): emits per-step strategy (`approach`, `expected_touch_set`, implementation steps, test plan, risks); IMPLEMENT persists it as step design data.
- [C-05] `coder-agent -> reviewer-agent` (via IMPLEMENT + beads notes): emits concrete changes + execution evidence (`files_modified`, build/test report, drift assessment); reviewer consumes for conformance and quality checks.
- [C-06] `reviewer-agent -> coder-agent` (revision loop via IMPLEMENT): emits findings (`issues`, `conformance`, `recommendation`); IMPLEMENT either commits or routes fixes back to coder.
- [C-07] `committer-agent -> IMPLEMENT state/progress`: consumes approved step context, performs commit operations (`tugcode commit`/`git commit` paths), returns metadata for progress tracking/step closure.
- [C-08] `auditor-agent -> coder-agent/integrator-agent` (IMPLEMENT gate): emits post-loop quality-gate results (`issues`, deliverable checks, `recommendation`); IMPLEMENT routes fixes or publish handoff.
- [C-09] `integrator-agent -> coder-agent/user completion path` (IMPLEMENT gate): emits publication outputs (`pr_url`, `ci_status`, checks); IMPLEMENT either closes or loops CI fixes.
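All of these channels share one shape: a subagent returns structured output, and the orchestrating skill decides what the next subagent sees. A minimal sketch of the [C-01] handoff (field names here are illustrative, not Tug's actual payload shapes):

```python
from dataclasses import dataclass, field

@dataclass
class Clarification:
    # Output shape of clarifier-agent per [C-01] (illustrative fields).
    questions: list[str]
    assumptions: list[str]

@dataclass
class AuthorBrief:
    # What the PLAN skill hands to author-agent: clarifier output
    # merged with the user's answers. The agents never talk directly.
    requirements: str
    answers: dict[str, str]
    assumptions: list[str] = field(default_factory=list)

def plan_handoff(requirements: str, clar: Clarification,
                 user_answers: dict[str, str]) -> AuthorBrief:
    """PLAN-skill mediation: combine clarifier output with user answers."""
    unanswered = [q for q in clar.questions if q not in user_answers]
    if unanswered:
        raise ValueError(f"unresolved questions: {unanswered}")
    return AuthorBrief(requirements=requirements,
                       answers=user_answers,
                       assumptions=clar.assumptions)
```

The point of the mediation is visible in the error path: the skill, not the downstream agent, enforces that every clarification question was answered before drafting begins.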
Problems in the Current Code
Beads and Skill <=> Subagent Communication
Beads URL: https://github.com/steveyegge/beads
Beads strongly prefers a central database on main. That conflicts with IMPLEMENT running in git worktrees. MERGE then struggles with conflicts because both main and worktree changed. The intended direction is to keep main free of IMPLEMENT-phase coordination churn.
Operationally, Skill <=> Subagent communication is orchestrator-mediated and structured, but currently routed through Beads fields (design, notes) with impedance mismatches. This caused a long sequence of complexity-heavy commits and filesystem workarounds:
- `2ba6c10` Reduce orchestrator overthinking in implement
- `07b048d` Move bead-write from agents to orchestrator
- `372c249` Remove positional content args from bead CLIs
- `4e24be1` Show explicit tool calls in bead-write instructions
- `c23504d` Fix terminal hang from heredoc in agents
- `a5af8e4` Stop init from deleting .beads directories
- `1878990` Bypass bd init worktree guard
- `5d06f88` Fix bd init failure in git worktrees
- `55a9493` Clean up stale beads-at-root artifacts
- `76f3023` Clean up stale beads-at-root artifacts
- `4d7ec4f` Beads Improvements (#43)
- `dad5ab5` Add beads improvements plan and proposal
[I-10] mostly works (open/close semantics are straightforward), but using Beads as source of truth for [I-11] failed. It does not represent full PLAN STEP checklist richness, which blocks delivery of [R-08] visibility.
It has also proven unexpectedly difficult to move information between subagents when mediated through `bd`/`tugcode beads` shell/file/heredoc paths.
Next Steps
Beads aims to be a distributed, git-backed graph issue tracker for AI agents, but this usage in Tug has not met needs.
We need to deliver:
- [I-01] through [I-13]
- [R-01] through [R-09]
- [C-01] through [C-09]
Question: what architecture best satisfies those constraints with less operational complexity?
Conversation
After writing the content above, I began a conversation with Claude Code (Opus 4.6) to discuss it.
🧐 Me
OK. We have some design work to do. Read over @roadmap/multi-agent-coordination.md. Dig in. Read the document. Read the code. Investigate. Ask clarifying questions. Start making some proposals about what we might do to improve this project to deliver on its goals and requirements. Let’s solve this information flow problem with rock-solid solutions!
🤖 Claude
Both agents are done. I now have a thorough understanding of the entire system. Let me lay out what I’ve found and my proposals.
Diagnosis: What’s Actually Happening
After reading the roadmap, the skills, agents, Rust CLI commands, and core library, here’s my understanding of the problem.
What Beads does today
Beads serves four roles in the IMPLEMENT phase:
- Step state machine: open -> closed (via `bd close`)
- Dependency-aware ready queries: `bd ready --parent <root>` returns steps whose deps are all closed
- Inter-agent data store: architect writes to design, coder writes to notes, reviewer appends to notes
- Step identity: `bead_mapping` gives each step anchor a unique bead ID
Where Beads fails
Reading the code tells the story clearly:
The temp file dance (SKILL.md:310-318): after every agent call, the orchestrator must:
- Agent writes temp file `_tmp_{bead_id}_strategy.md`
- Orchestrator runs `tugcode beads append-design {bead_id} --content-file {path}`
- Orchestrator deletes temp file
- If CLI fails, warn and continue (because "the agent's JSON output already contains all data needed")
That last point is damning: the beads write is acknowledged as redundant. The orchestrator already has the data in memory. It’s writing to beads for persistence, but then telling itself it’s OK if that persistence fails.
Checklist blindness (roadmap line 97): Beads tracks step open/closed, but each step has tasks, tests, and checkpoints—fine-grained checklists that Beads can’t model. There’s no way to tell the user: “Step 2: 4/7 tasks done, 2/3 tests passing, 1/1 checkpoint verified.”
Worktree friction (beads.rs:229-234): Beads' `bd init` refuses to run in git worktrees. The code manually creates `.beads/`, bootstraps SQLite via `bd --db <path> config set`, then passes `--db` on every command. This was a multi-commit struggle.
Main branch pollution (roadmap line 81): Beads wants data on main. The IMPLEMENT phase works on a worktree branch. Merge creates conflicts on data that should have stayed local.
What’s actually working fine
- The plan phase: pure in-memory JSON between agents, no persistence needed
- The orchestration pattern: sequential agent spawning with persistent agent IDs works well
- The `tugcode worktree create` flow: creates worktree, branch, validates plan, returns structured JSON
- The `tugcode commit` command: atomic log-rotate + git-commit + bead-close
Proposal: Replace Beads with Tugstate (v1)
I’m proposing a JSON-file-based state system that lives entirely in the worktree.
Core design
A single file: .tugtool/state.json in the worktree.
```text
.tugtree/tugtool__my-plan-20260223/
├── .tugtool/
│   ├── state.json          ← NEW: all coordination state
│   ├── tugplan-my-plan.md
│   ├── tugplan-skeleton.md
│   ├── log.md
│   └── config.toml
├── .beads/                 ← REMOVED
│   └── beads.db
└── (source code)
```

The state file tracks everything the orchestrator and user need:
```json
{
  "version": 1,
  "plan_path": ".tugtool/tugplan-my-plan.md",
  "phase": "implement",
  "created_at": "2026-02-23T10:00:00Z",
  "updated_at": "2026-02-23T10:45:00Z",
  "steps": [
    {
      "anchor": "step-0",
      "title": "Create API client",
      "status": "completed",
      "depends_on": [],
      "started_at": "2026-02-23T10:05:00Z",
      "completed_at": "2026-02-23T10:30:00Z",
      "commit_hash": "abc123d",
      "tasks": [
        { "text": "Create HTTP client struct", "status": "completed" },
        { "text": "Add retry logic", "status": "completed" }
      ],
      "tests": [
        { "text": "Unit test: retry backoff", "status": "completed" }
      ],
      "checkpoints": [
        { "text": "cargo test passes", "status": "completed" },
        { "text": "cargo clippy clean", "status": "completed" }
      ]
    },
    {
      "anchor": "step-1",
      "title": "Add caching layer",
      "status": "in_progress",
      "depends_on": ["step-0"],
      "started_at": "2026-02-23T10:31:00Z",
      "tasks": [
        { "text": "Implement cache store", "status": "completed" },
        { "text": "Add cache invalidation", "status": "in_progress" },
        { "text": "Wire up to API client", "status": "open" }
      ],
      "tests": [
        { "text": "Cache hit/miss test", "status": "open" }
      ],
      "checkpoints": [
        { "text": "cargo test passes", "status": "open" }
      ]
    }
  ]
}
```

Why this works
Single orchestrator, sequential execution. There’s one orchestrator process running agents one at a time. No concurrent writes means no need for SQLite locking or database transactions. A JSON file is perfectly atomic for this use case.
Delivers [R-08] completely. The state file tracks every checklist item at open/in_progress/completed granularity. tugcode status can render:
```text
Step 0: Create API client [completed]
  Tasks:  2/2 ████████████ 100%
  Tests:  1/1 ████████████ 100%
  Checks: 2/2 ████████████ 100%

Step 1: Add caching layer [in_progress]
  Tasks:  1/3 ████░░░░░░░░ 33%
  Tests:  0/1 ░░░░░░░░░░░░ 0%
  Checks: 0/1 ░░░░░░░░░░░░ 0%
```

Eliminates the temp file dance. The orchestrator writes state directly after parsing agent JSON output. No temp files, no CLI calls, no filesystem intermediaries.
Worktree-local, main stays clean. The state file lives in .tugtool/ inside the worktree. When the branch is merged, the state file can be dropped (or kept as a record). No conflicts.
Debuggable: `cat .tugtool/state.json | jq .steps[1].tasks`.
New CLI commands
```text
tugcode state init <plan_path>      # Initialize state.json from plan's steps/checklists
tugcode state show [--json]         # Show progress (replaces `tugcode beads status`)
tugcode state next                  # Return next ready step (replaces `bd ready`)
tugcode state start <step-anchor>   # Mark step in_progress
tugcode state update <step-anchor>  # Update checklist items for a step
tugcode state complete <step>       # Mark step completed with commit hash
```

What changes in the orchestrator
| Current (Beads) | New (Tugstate) |
| --- | --- |
| `tugcode worktree create` syncs beads | `tugcode worktree create` calls `tugcode state init` |
| Returns `bead_mapping`, `root_bead_id` | Returns `all_steps`, `ready_steps` (computed from state.json) |
| After architect: `tugcode beads append-design` + delete temp | After architect: `tugcode state update step-0 --architect-done` |
| After coder: `tugcode beads update-notes` + delete temp | After coder: `tugcode state update step-0 --tasks-completed t1,t2` |
| After reviewer: `tugcode beads append-notes` + delete temp | After reviewer: `tugcode state update step-0 --tests-completed` |
| `tugcode commit` closes bead | `tugcode commit` calls `tugcode state complete step-0` |

What about inter-agent data?
The key insight: the persistent agent pattern already solves this. The architect, coder, reviewer, and committer are each spawned once and resumed. They retain full context. The coder doesn’t read the architect’s strategy from Beads; the orchestrator passes it in the resume prompt.
The state file doesn’t need to store agent outputs (approach, expected_touch_set, etc.). It stores progress state: what’s done, what’s in flight, what’s next. Agent-to-agent data flows through orchestrator memory plus the Task resume mechanism.
Migration path
- Phase 1: Add `tugcode state` commands + `state.json` schema. Keep Beads working.
- Phase 2: Update implement skill to write to `state.json` alongside Beads. `tugcode status` reads `state.json`.
- Phase 3: Remove Beads writes from implement skill. `tugcode worktree create` stops syncing Beads.
- Phase 4: Remove Beads code from `tugcode` CLI and `tugtool-core`.
🧐 Me
OK. This is a great analysis. I love how simple it is. The main limitation with this approach is that it isolates the IMPLEMENT phase to a single git worktree on a single machine. Does that make sense?
There would never be any possibility of [R-04]: start an IMPLEMENT phase for a VALIDATED PLAN at will at any time. There is nothing in this requirement which says ”… only if there are no other IMPLEMENT phases running for that PLAN.”
If we ever want to allow multiple IMPLEMENT phases to run at once that don’t duplicate work, we would need a solution for [I-08] and [I-09], right? There is no way in your proposal to do concurrent IMPLEMENT phases. What are your ideas for dealing with this?
🤖 Claude
You’re absolutely right, and this is the critical design question.
The Concurrency Problem
With a JSON file in the worktree, there’s no shared state. Two IMPLEMENT phases for the same plan can’t coordinate:
Orchestrator A reads state.json, sees step-0 ready, starts step-0.
Orchestrator B reads state.json, sees step-0 ready, starts step-0.
Duplicate work.
For [I-08] and [I-09] to work with concurrent IMPLEMENT phases, you need:
- Shared state visible to all orchestrators
- Atomic claim so only one orchestrator can grab a step
Where the shared state lives:
```text
repo/
├── .tugtool/
│   ├── tugplan-foo.md
│   └── state.db        ← shared across all worktrees
├── .tugtree/
│   ├── worktree-A/     ← orchestrator A (claimed step-0, step-2)
│   └── worktree-B/     ← orchestrator B (claimed step-1, step-3)
```

Revised Proposal: Embedded SQLite
This changes the answer from JSON to SQLite—but not Beads. A self-owned, embedded SQLite database with a schema designed for Tug.
Why SQLite instead of JSON:
- `BEGIN EXCLUSIVE` transaction gives atomic claims
- Concurrent reads are free (WAL mode)
- Still a single file (`.tugtool/state.db`), still debuggable
- No external binary: `rusqlite` compiles SQLite into `tugcode`
Why not Beads:
- Own schema, no impedance mismatch
- No `bd` binary dependency
- No worktree init friction
- Checklist items are first-class rows
Schema (draft):
```sql
CREATE TABLE plans (
  plan_path TEXT PRIMARY KEY,
  status TEXT NOT NULL DEFAULT 'active',
  created_at TEXT NOT NULL,
  updated_at TEXT NOT NULL
);

CREATE TABLE steps (
  plan_path TEXT NOT NULL REFERENCES plans(plan_path),
  anchor TEXT NOT NULL,
  title TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  claimed_by TEXT,
  started_at TEXT,
  completed_at TEXT,
  commit_hash TEXT,
  PRIMARY KEY (plan_path, anchor)
);

CREATE TABLE step_deps (
  plan_path TEXT NOT NULL,
  step_anchor TEXT NOT NULL,
  depends_on TEXT NOT NULL,
  PRIMARY KEY (plan_path, step_anchor, depends_on),
  FOREIGN KEY (plan_path, step_anchor) REFERENCES steps(plan_path, anchor),
  FOREIGN KEY (plan_path, depends_on) REFERENCES steps(plan_path, anchor)
);

CREATE TABLE checklist_items (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  plan_path TEXT NOT NULL,
  step_anchor TEXT NOT NULL,
  kind TEXT NOT NULL,
  text TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'open',
  updated_at TEXT,
  FOREIGN KEY (plan_path, step_anchor) REFERENCES steps(plan_path, anchor)
);
```

Atomic claim operation:
```sql
BEGIN EXCLUSIVE;

SELECT s.anchor FROM steps s
WHERE s.plan_path = ?
  AND s.status = 'pending'
  AND NOT EXISTS (
    SELECT 1 FROM step_deps d
    JOIN steps dep ON dep.plan_path = d.plan_path AND dep.anchor = d.depends_on
    WHERE d.plan_path = s.plan_path
      AND d.step_anchor = s.anchor
      AND dep.status != 'completed'
  )
ORDER BY s.anchor
LIMIT 1;

UPDATE steps
SET status = 'claimed', claimed_by = ?, started_at = ?
WHERE plan_path = ? AND anchor = ? AND status = 'pending';

COMMIT;
```

CLI commands:
```text
tugcode state init <plan_path>
tugcode state next <plan_path>
tugcode state start <plan> <step>
tugcode state update <plan> <step>
tugcode state complete <plan> <step>
tugcode state show [plan] [--json]
```

What changes for worktree create:
- Call `tugcode state init <plan>` if missing (idempotent)
- Call `tugcode state next <plan>` to claim the first ready step
- Return that step info to the orchestrator
No Beads. No `.beads/` directory. No `bd` binary.
Future multi-machine path:
- Wrap SQLite operations behind `tugcast` endpoints
- Keep schema and semantics
- Or swap to Postgres later if needed
Summary shift:
| | Beads (current) | JSON (first proposal) | SQLite (revised) |
| --- | --- | --- | --- |
| External binary | `bd` required | none | none (`rusqlite` embedded) |
| Concurrent IMPLEMENT | partially | impossible | full atomic claims |
| Checklist tracking | no | yes | yes |
| Lives in | worktree `.beads/` | worktree `.tugtool/` | repo root `.tugtool/` |
| Main branch clean | no | yes | yes (gitignore DB) |
| Debuggable | `bd show` | `cat`/`jq` | `sqlite3` |
| Multi-machine path | none | none | `tugcast` wrapper or Postgres |

The core insight is right: coordination state must be shared and atomic. JSON can't provide that; SQLite can, without the Beads friction.
🧐 Me
OK. If we’re using a central database in the repo root, then:
- How do we merge and avoid all the messy merge mechanics we eliminated in `.tugtool/tugplan-beads-improvements.md` and `4d7ec4f`?
- Why SQLite instead of Dolt? What would Dolt give us that SQLite doesn't?
🤖 Claude
Two sharp questions.
Merge: How do we avoid the mess?
The Beads merge problem was that both main and worktree branches committed a binary SQLite DB. Git can’t merge that cleanly.
Solution: never commit the state DB.
```text
.tugtool/state.db
.tugtool/state.db-wal
.tugtool/state.db-shm
```

`state.db` is an operational artifact, not source code.
Committed on worktree branch:
- Source code changes
- Plan file (`.tugtool/tugplan-foo.md`), unchanged during IMPLEMENT
- Implementation log (`.tugtool/log.md`)
Not committed:
- `.tugtool/state.db`
So merge is source-only, and conflict risk drops dramatically.
For progress reporting, the plan file remains the immutable spec; execution progress is read from `tugcode state show`.
Why SQLite, not Dolt?
Dolt is interesting:
- Data branching/merging
- SQL history by revision
- SQL conflict resolution
- Built-in audit history
But likely too heavy for Tug right now:
- Large external binary footprint
- External server/process lifecycle complexity
- Not embeddable in the `tugcode` binary
- Overkill for Tug's small state volume
- Introduces a second branching model (Git + Dolt)
SQLite advantages here:
- Embedded, no extra install
- No daemon/server/port/process management
- Proven local concurrency with WAL + transactional claims
- Small and inspectable
- Clear path later to `tugcast`/Postgres if requirements change
Recommendation: embedded SQLite in repo root, gitignored, atomic on one machine.
🧐 Me
I hear you on Dolt, and I agree it solves a harder problem than Tug has right now.
A gitignored SQLite DB in the repo root feels like the sweet spot. For [I-08] and [I-09], I'm comfortable defining "atomically" as "atomically on one machine" for now. We don't currently have a multi-machine/multi-developer requirement.
So next step: write a complete proposal in @roadmap/ based on @roadmap/multi-agent-coordination.md and this chat. Cover risks, and structure it so we can execute in two plans:
- Add SQLite-based mechanism while Beads infra still exists
- Use that mechanism to remove Beads dependency cleanly
That should avoid the chicken-and-egg problem and let us run two tugplans back-to-back.
Footnotes
-
I had been using beads for issue tracking. I've long been a fan of the prose writing of Steve Yegge, the developer of beads, and the beads feature set sounded like a better fit than it turned out to be in practice. Little impedance mismatches started piling up, and I wound up fighting with the software more than I should have. So I decided to roll my own issue-tracking solution with SQLite and some Rust code. ↩
-
I wrote most of the document the “old-fashioned” way, one word at a time, except that I asked the AI to fill in the agent descriptions under the Skill <=> Subagent Communication section. link ↩