One piece I helped with is SenL, a “sensitivity level” framework for AI labs. It’s essentially a practical clearance system for AI assets. Not everything in a lab is equally dangerous, so you label each asset class (weights, training data, eval sets, agent tooling, deployment configs, etc.) by sensitivity, then tie that label to concrete controls: who can access it, where it can run, what logging is required, and what monitoring or two-person rules apply. A sketch of the idea follows below.
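To make the label-to-controls idea concrete, here’s a minimal sketch in Python. To be clear, every name in it (the tier names, roles, environments) is an illustrative placeholder I made up, not SenL’s actual schema.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical sensitivity tiers; the real SenL labels may differ.
class Sensitivity(IntEnum):
    LOW = 1
    HIGH = 2
    CRITICAL = 3

@dataclass(frozen=True)
class ControlPolicy:
    allowed_roles: frozenset[str]         # who may access the asset
    allowed_environments: frozenset[str]  # where it may be stored or run
    audit_logging: bool                   # must every access be logged?
    two_person_rule: bool                 # does access need a second approver?

# Illustrative label -> controls mapping (made-up roles and environments).
POLICIES: dict[Sensitivity, ControlPolicy] = {
    Sensitivity.LOW: ControlPolicy(
        allowed_roles=frozenset({"researcher", "engineer"}),
        allowed_environments=frozenset({"laptop", "shared-cluster"}),
        audit_logging=False,
        two_person_rule=False,
    ),
    Sensitivity.HIGH: ControlPolicy(
        allowed_roles=frozenset({"engineer"}),
        allowed_environments=frozenset({"shared-cluster"}),
        audit_logging=True,
        two_person_rule=False,
    ),
    Sensitivity.CRITICAL: ControlPolicy(
        allowed_roles=frozenset({"cleared-engineer"}),
        allowed_environments=frozenset({"airgapped-enclave"}),
        audit_logging=True,
        two_person_rule=True,
    ),
}

def access_allowed(label: Sensitivity, role: str, environment: str) -> bool:
    """Deny-by-default check of a (role, environment) pair against an asset's label."""
    policy = POLICIES.get(label)
    if policy is None:
        return False  # unmapped label: fail closed
    return role in policy.allowed_roles and environment in policy.allowed_environments

if __name__ == "__main__":
    print(access_allowed(Sensitivity.CRITICAL, "researcher", "laptop"))  # False
    print(access_allowed(Sensitivity.LOW, "researcher", "laptop"))       # True
```

The point is the mapping: once an asset carries a label, its controls are looked up rather than re-negotiated per asset, and anything without a mapped label fails closed.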
If anyone’s curious, SL5 is here: http://sl5.org/, and the SenL framework is part of the published artifacts.