Since you lived with this for such a long stretch, I'd love your gut reaction to the specific escape hatches I'm building in to avoid the rigidity trap:
1. Arenas grow, not fixed:
Unlike stack frames, the arenas in my model can expand dynamically. So it's not "size for worst case"—it's "grow as needed, free all at once when scope ends." A request handler that processes 10 items or 10,000 items uses the same code; the arena just grows.
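To make that concrete, here's a simplified sketch of the shape I have in mind — a chunked bump arena that grows on demand and frees everything at once. The real allocator differs (this `Arena` and its chunk strategy are just an illustration):

```rust
// Sketch only: a growable arena. New chunks are added on demand;
// all memory is released at once when the arena goes out of scope.
struct Arena {
    chunks: Vec<Vec<u8>>, // each chunk is bump-allocated front to back
    used: usize,          // bytes used in the current (last) chunk
    chunk_size: usize,
}

impl Arena {
    fn new(chunk_size: usize) -> Self {
        Arena { chunks: vec![Vec::with_capacity(chunk_size)], used: 0, chunk_size }
    }

    // Reserve `n` bytes, growing by a new chunk when the current one is full.
    fn alloc(&mut self, n: usize) -> &mut [u8] {
        if self.used + n > self.chunk_size {
            self.chunks.push(Vec::with_capacity(self.chunk_size.max(n)));
            self.used = 0;
        }
        let chunk = self.chunks.last_mut().unwrap();
        let start = self.used;
        chunk.resize(start + n, 0);
        self.used += n;
        &mut chunk[start..start + n]
    }
}

fn main() {
    let mut arena = Arena::new(4096);
    // 10 items or 10,000 items: same code, the arena just grows.
    for _ in 0..10_000 {
        let slot = arena.alloc(16);
        slot[0] = 1;
    }
    println!("chunks allocated: {}", arena.chunks.len()); // prints "chunks allocated: 40"
    // all chunks are released at once when `arena` goes out of scope
}
```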
2. Handles for non-hierarchical references:
When data genuinely needs to outlive its lexical scope or be shared across the hierarchy, you get a generational handle:
```rust
let handle = app_cache.store(expensive_result);
// handle can be passed around, stored, retrieved later
// data lives in app scope, not request scope
```
The handle includes a generation counter, so if the underlying scope dies, dereferencing returns `None` instead of a use-after-free.

3. Explicit clone for escape:
If you need to return data from an inner scope to an outer one, you call `clone()` and it copies the data into the caller's arena. Not automatic, but not forbidden either.
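Here's roughly how the generational check behaves — a toy model, not the real API (`Scope`, `store`, `get`, and `reset` are stand-in names):

```rust
// Sketch of generational handles: a handle remembers the generation it
// was created in; once the scope is reset (generation bumped),
// dereferencing yields None instead of touching freed data.
struct Scope<T> {
    generation: u64,
    slots: Vec<(u64, T)>, // (generation the value was stored in, value)
}

#[derive(Clone, Copy)]
struct Handle {
    index: usize,
    generation: u64,
}

impl<T> Scope<T> {
    fn new() -> Self {
        Scope { generation: 0, slots: Vec::new() }
    }

    fn store(&mut self, value: T) -> Handle {
        self.slots.push((self.generation, value));
        Handle { index: self.slots.len() - 1, generation: self.generation }
    }

    fn get(&self, h: Handle) -> Option<&T> {
        match self.slots.get(h.index) {
            Some((gen, v)) if *gen == h.generation && h.generation == self.generation => Some(v),
            _ => None,
        }
    }

    // "Scope dies": free everything at once and bump the generation,
    // so any stale handles dereference to None.
    fn reset(&mut self) {
        self.slots.clear();
        self.generation += 1;
    }
}

fn main() {
    let mut app_cache = Scope::new();
    let handle = app_cache.store("expensive_result");
    assert_eq!(app_cache.get(handle), Some(&"expensive_result"));

    app_cache.reset(); // underlying scope dies
    assert_eq!(app_cache.get(handle), None); // stale handle, no use-after-free
}
```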
4. The hierarchy matches server reality:
App (config, pools, caches)
└── Worker (thread-local state)
└── Task (single request)
└── Frame (loop iteration)
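As nested scopes, the hierarchy would look something like this — purely illustrative (`Scope` here just logs when each level's memory would be bulk-freed; the names are hypothetical):

```rust
use std::cell::RefCell;
use std::rc::Rc;

type Log = Rc<RefCell<Vec<&'static str>>>;

// Stand-in for an arena scope: records its name when "freed".
struct Scope {
    name: &'static str,
    log: Log,
}

impl Scope {
    fn new(name: &'static str, log: &Log) -> Self {
        Scope { name, log: Rc::clone(log) }
    }
    fn child(&self, name: &'static str) -> Scope {
        Scope::new(name, &self.log)
    }
}

impl Drop for Scope {
    // In the real model, this is where the whole arena is freed at once.
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn main() {
    let log: Log = Rc::new(RefCell::new(Vec::new()));
    {
        let app = Scope::new("app", &log);    // config, pools, caches
        let worker = app.child("worker");     // thread-local state
        let task = worker.child("task");      // single request
        for _ in 0..2 {
            let _frame = task.child("frame"); // loop iteration
        } // frame memory reclaimed each iteration
    } // task, worker, app reclaimed innermost-first
    assert_eq!(*log.borrow(), ["frame", "frame", "task", "worker", "app"]);
}
```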
For request/response workloads, this isn't an artificial constraint; it's how the work actually flows. The memory model just makes it explicit.

Where I think it still gets awkward:
* Graph structures with cycles (need handles, less ergonomic than GC)
* FFI with libraries expecting malloc/free (planning an `unmanaged` escape hatch)
* Long-running mutations without periodic scope resets (working on incremental reclamation)
Do you think this might address the pain you experienced, or am I missing something? Particularly curious whether the handle mechanism would have helped with the cases where you had to hammer code into the hierarchy.