I think abstract types are a brittle solution. The "can of worms" I alluded to is something like this: library TensorFlow implements a model "nn", library PyTorch also implements a model "nn", and both want to override "fit" to handle their new type... Good luck combining them in the same codebase. This problem is less pronounced in OOP, where each development team controls its own methods. Julia devs can solve this by having every developer of every "fit" function and every developer of every model struct agree beforehand on a common abstraction, but that's an expensive, brittle solution that hurts innovation velocity.
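A minimal sketch of the clash I mean (the module names, the "nn" structs, and the "fit" functions are all invented for illustration): each library defines its own "fit" over its own type, and the moment you load both, the unqualified name is ambiguous unless they had agreed on a shared generic up front.

```julia
# Two hypothetical libraries, each with its own model type and fit function.
module LibTF
    struct NN end
    fit(m::NN, X, y) = "LibTF fit"
    export NN, fit
end

module LibPT
    struct NN end
    fit(m::NN, X, y) = "LibPT fit"
    export NN, fit
end

using .LibTF, .LibPT
# Both modules export `fit` (and `NN`), so the unqualified names are now
# ambiguous: `fit(...)` raises an error in Main, and every call site must
# be qualified, which defeats the point of a shared generic function.
LibTF.fit(LibTF.NN(), nothing, nothing)   # works, but only when qualified
```

If instead both libraries had extended a single `fit` generic owned by some common base package, dispatch would compose cleanly; that prior coordination is exactly the expensive agreement I'm describing.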
I think the closest I can get in Julia via pure structs is for the developer to define and expose their preferred fit function as a field of the struct, something like "fit = model.fit_function; fit(model, X, Y)", but that introduces a boilerplate tax on every method I want to call (fit, predict, score, cross-validate, hyperparameter search, etc.). (EDIT: indeed, I think this is pretty much what MLJ is doing: having each model developer expose a struct with "fit" and "predict" functions, and using the @load macro to automate the above boilerplate by putting the right version of "fit" into global state when you @load each model... but as described above, I don't like macro magic like this.)
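Here is roughly what that struct-of-functions workaround looks like; to be clear, `Model`, its fields, and the toy closures below are all hypothetical, not MLJ's actual API:

```julia
# A model struct that carries its own implementations as plain fields.
struct Model
    fit::Function
    predict::Function
    params::Dict{Symbol,Any}
end

nn = Model(
    (m, X, y) -> "fitted on $(length(y)) samples",   # stand-in fit
    (m, X)    -> zeros(length(X)),                   # stand-in predict
    Dict(:layers => 3),
)

# The boilerplate tax: every call site pulls the function out first,
# and you repeat this dance for predict, score, and so on.
fit = nn.fit
fit(nn, [1, 2, 3], [0, 1, 0])
```

This dodges the namespace clash (each model brings its own functions), but you lose multiple dispatch entirely: `nn.fit` is just a value, so nothing generic can specialize on the model type.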