Have you reached the point of dogfooding your own models to augment work that would normally be considered engineering?
If so, I wonder about the implications of black-box models helping to study and develop solutions for interpretability. It gives me "Reflections on Trusting Trust" vibes — Ken Thompson's point that you can't fully trust a system whose trusted tools are themselves opaque.