- We build the Thread Inference Model (TIM), a transformer-based language model, together with its dedicated runtime, TIMRUN.
- TIM + TIMRUN = intelligent workflow generation, context engineering, and multi-hop tool use, all handled at the runtime level
- TIM + TIMRUN supports virtually unlimited reasoning via context pruning, significantly improving efficiency on long-horizon reasoning tasks
- Inference API is live at https://subconscious.dev/
- More details: https://github.com/subconscious-systems/TIMRUN
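To give a rough intuition for the context-pruning idea mentioned above, here is a minimal, self-contained sketch. It is purely illustrative and is not TIM's or TIMRUN's actual mechanism: the `prune_context` function and the trace format are hypothetical, assuming only that a finished subtask's intermediate steps can be replaced by its conclusion so the working context stays bounded as reasoning grows.

```python
def prune_context(context):
    """Illustrative context pruning: once a subtask is complete (it has a
    conclusion), drop its intermediate reasoning and keep only the conclusion.
    Unfinished subtasks are kept in full. This is a hypothetical sketch,
    not TIMRUN's real implementation."""
    pruned = []
    for entry in context:
        if entry.get("conclusion") is not None:
            # Completed subtask: keep only the compact conclusion.
            pruned.append({"conclusion": entry["conclusion"]})
        else:
            # Active subtask: keep its full working state.
            pruned.append(entry)
    return pruned


# Hypothetical reasoning trace: two finished subtasks, one in progress.
trace = [
    {"step": "search docs for pruning strategies ...",
     "conclusion": "use stack-based pruning"},
    {"step": "call tool: fetch(page) -> long raw output ...",
     "conclusion": "page fetched; key fact extracted"},
    {"step": "currently reasoning about the next hop",
     "conclusion": None},
]

pruned = prune_context(trace)
```

After pruning, the two completed subtasks contribute only their conclusions, while the active subtask keeps its full state; the context the model must attend to no longer scales with the length of the full trace.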