Not quite the same scheduling/locking semantics, though — sure, it's "concurrent" and packet-oriented, but it's a token ring. It's similar to the original Redis model of concurrency: just have everyone make a series of tiny requests, then loop around select(2) over your clients, handling each client's latest request serially, with no request queuing, because all requests are synchronous/blocking for the client making them.
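That Redis-style model can be sketched in a few lines — this is an illustrative toy (socketpairs standing in for TCP clients, a trivial SET/GET "protocol"), not Redis's actual event loop:

```python
import selectors, socket

# One thread select(2)s over all clients and handles each pending
# request serially; a client blocks until its reply arrives, so there
# is never more than one outstanding request per client — hence no
# request queuing is needed.
sel = selectors.DefaultSelector()
store = {}  # toy key-value state, only ever touched by the one loop thread

def handle(conn):
    # Handle exactly one tiny request ("SET k v" or "GET k"), then return.
    parts = conn.recv(1024).decode().split()
    if parts[0] == "SET":
        store[parts[1]] = parts[2]
        conn.sendall(b"OK")
    elif parts[0] == "GET":
        conn.sendall(store.get(parts[1], "").encode())

# Simulate two clients with socketpairs (stand-ins for real connections).
clients = []
for _ in range(2):
    server_side, client_side = socket.socketpair()
    sel.register(server_side, selectors.EVENT_READ, handle)
    clients.append(client_side)

# Each client issues one blocking request...
clients[0].sendall(b"SET x 42")
clients[1].sendall(b"SET y 7")

# ...and the single loop drains whatever is readable, one at a time.
for key, _mask in sel.select(timeout=1):
    key.data(key.fileobj)

reply0 = clients[0].recv(1024)
reply1 = clients[1].recv(1024)
print(reply0, reply1)
```

The point being: concurrency comes entirely from interleaving tiny serial steps, not from parallelism.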
N logistics bots are a much closer analogue to N scheduler threads, each working in parallel to (1) take a message from a priority heap (really, to take a schedulable process from a priority heap and then run it to produce messages, but same difference) and then (2) synchronously shuttle that message to its destination queue.
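A minimal sketch of that bot model, assuming nothing beyond stdlib threading — all names here are illustrative:

```python
import heapq, queue, threading

# N worker ("bot") threads each pop the highest-priority item from a
# shared heap, "run" it to produce a message, and synchronously deliver
# that message to its destination queue.
heap_lock = threading.Lock()
ready_heap = []  # entries: (priority, seq, payload, destination)
destinations = {name: queue.Queue() for name in ("a", "b")}

for seq, (prio, payload, dest) in enumerate(
    [(2, "low", "a"), (0, "urgent", "b"), (1, "mid", "a")]
):
    heapq.heappush(ready_heap, (prio, seq, payload, dest))

def bot():
    while True:
        with heap_lock:                  # (1) take one item off the heap...
            if not ready_heap:
                return
            _prio, _seq, payload, dest = heapq.heappop(ready_heap)
        msg = f"{payload}!"              # ..."run" it to produce a message...
        destinations[dest].put(msg)      # (2) ...and shuttle it to its queue.

threads = [threading.Thread(target=bot) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

delivered_a = list(destinations["a"].queue)
delivered_b = list(destinations["b"].queue)
print(delivered_a, delivered_b)
```

Note that with multiple bots racing, delivery order within one destination queue is only approximately priority order — just like real bots arriving at a chest.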
I suppose you could get the same semantics with N parallel train tracks, one per scheduler thread, plus an actual scheduler priority queue implemented in circuits. But I feel like that's a "non-idiomatic case", in that, no matter how you designed your stations or where you put them, it'd be incredibly painful to design a loader/unloader for a "bus" of parallel train tracks. Especially if all the trains arrive together. It'd be a setup the devs would take one look at and say "that's too much of a kludge, and yet the kludginess is necessary, and that's our fault for not including a necessary abstraction. We'll just put in that abstraction."
(The alternative would be a mod that allows trains to pass through one another on a track. Then you could have one train line where each train cleanly represents a scheduler thread. No idea what would happen if they tried to stop at the same station at the same time, though. Ghost trains~)