Our GitLab instance has a lot of projects, and it has been helpful for users to have a set of template projects, each with its own Docker image. Some of those images are many gigabytes in size, with tricky environment variables and so on. Docker “democratized” CI for most of our scientific personnel, who aren’t devs: they can hit the Fork button and have a working CI config to base their project on.
In the ML projects, it serves mainly to package dependencies and to enforce some basic security constraints: raw datasets are mounted read-only, so if we suspect an issue with cached results (our inner orchestrator is Make, after all), we can nuke all the derived results and start over from scratch, confident that the raw data is intact.
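As a rough sketch of how that read-only constraint can be enforced with GitLab Runner’s Docker executor — the host path here is hypothetical, not our actual layout:

```toml
# config.toml for a gitlab-runner Docker executor (illustrative paths)
[[runners]]
  executor = "docker"
  [runners.docker]
    # Mount the raw datasets read-only inside every job container,
    # so pipelines can read them but never modify them.
    volumes = ["/data/raw:/data/raw:ro"]
```

With the mount declared at the runner level, no individual project’s CI config can opt out of it.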
The models and their arguments live in the CI config. No magic there, but since it’s all versioned in the repo, I’m OK with it.
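A minimal sketch of what such a CI config might look like — the job name, image, variable names, and values are all made up for illustration, not our actual setup:

```yaml
# .gitlab-ci.yml (hypothetical names and values)
train:
  image: registry.example.org/ml-template:latest  # the template project's image
  variables:
    MODEL: "resnet50"      # model choice lives in the repo's CI config...
    LEARNING_RATE: "1e-3"  # ...as do its arguments
  script:
    # Make is the inner orchestrator; removing results/ forces a clean rebuild
    - make MODEL=$MODEL LR=$LEARNING_RATE results
```

Because the whole thing is committed alongside the code, every run’s model and parameters are reproducible from the repo history.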
This whole setup was put together for an upcoming clinical trial, as a step toward compliance with ISO quality norms, so I can’t share it right now. I do intend to reproduce it in an open form alongside our existing software (GitHub.com/the-virtual-brain) when it’s ready.
In any case, I appreciate your questions a lot: they drove me to think a little harder and see why platforms like Michelangelo and PyML are something even an academic/clinical group like ours should be using... if we can find the time to do it.