under #rust

Compaction in `sid`

Sid has a novel compaction strategy I’ve not seen anyone else use. In short, it forms a singly linked list of contexts and uses recursion to walk the list, invoking an ask-an-expert path at each node. By stripping tools and requiring the model to answer from memory, the expert can answer questions about 100% of what’s in its context window.
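A minimal sketch of that structure, assuming a `Context` type and an `ask_expert` method (both names are mine, not sid's actual API) and substituting a plain substring check where sid would make a tool-stripped LLM call:

```rust
// Hypothetical sketch: each session's context links back to the
// context it compacted away from, forming a singly linked list.
struct Context {
    transcript: String,           // full transcript of this session
    parent: Option<Box<Context>>, // the compacted-away predecessor, if any
}

impl Context {
    /// Walk the chain recursively, asking each older context in turn.
    /// In sid this would be an inference call with tools stripped,
    /// forcing the model to answer purely from its own transcript;
    /// here a substring check stands in for that call.
    fn ask_expert(&self, question: &str) -> Option<String> {
        if self.transcript.contains(question) {
            return Some(format!("answered from transcript: {question}"));
        }
        // Recurse into the previous context, if there is one.
        self.parent.as_ref().and_then(|p| p.ask_expert(question))
    }
}
```

The recursion bottoms out at the oldest context, whose `parent` is `None`, so a question nothing in the chain can answer comes back as `None`.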

I don’t normally let a coding agent run so long that it needs compaction, but I do want to be able to live past the end of a context window and do longer-trajectory work autonomously.

This presents a conundrum: I must either use compaction or find an alternative solution.

This blog post is about the former.

Compaction Algorithm

Storage space is cheap compared to running inference on an LLM, so I choose to store every session. I even go back to old sessions to find things I’ve forgotten but know I once specified in words and now want to recreate.

The insight at the core of sid’s compaction algorithm is that we can do this automatically. So long as the context compacts before the end of the context window, the transcript can be reused indefinitely for one-off, cache-friendly questions.

And because the expert’s context literally is the context window from a previous session, it can come back with an answer, ask its own expert recursively, or just give up and say, “I don’t know.”
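Those three outcomes can be modeled as an enum, with chain-walking reduced to a loop that follows "ask deeper" replies until something answers or gives up. This is an illustrative sketch of the behavior described above, not sid's actual types:

```rust
/// The three ways an expert can respond, per the description above.
enum ExpertReply {
    Answer(String), // answered directly from its own transcript
    AskDeeper,      // recurse: consult its own, older expert
    DontKnow,       // honest failure
}

/// Follow AskDeeper replies down the chain of experts until one
/// produces an Answer, or someone gives up, or the chain runs out.
fn resolve(replies: Vec<ExpertReply>) -> String {
    for reply in replies {
        match reply {
            ExpertReply::Answer(a) => return a,
            ExpertReply::AskDeeper => continue,
            ExpertReply::DontKnow => break,
        }
    }
    "I don't know".to_string()
}
```

Note that an explicit `DontKnow` anywhere in the chain stops the walk; the question doesn't keep propagating past an expert that has conceded.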

State of sid

Sid’s usable, for the most part. It reminds me of the default shell experience on FreeBSD 7. It was rough; you could even accidentally `cat` a directory. But if you knew the rough edges and the mechanics of signal handling, everything worked out.

In short, sid has the compaction algorithm implemented for manual compaction.

If you want to use sid, don’t mash ^C, and understand that ^D is not a literal EOF character. If you fully grok the context of what I’m saying, you’ll be able to interrupt sid and work with it well.